Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step5: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in algorithm also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Step10: Configure parameters
The following table shows parameters that are common to all Vertex Training jobs created using the gcloud ai custom-jobs create command. See the official documentation for all the possible arguments.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| display-name | string | Name of the job. | Yes |
| worker-pool-spec | string | Comma-separated list of arguments specifying a worker pool configuration (see below). | Yes |
| region | string | Region to submit the job to. | No |
The worker-pool-spec flag can be specified multiple times, one for each worker pool. The following table shows the arguments used to specify a worker pool.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| machine-type | string | Machine type for the pool. See the official documentation for supported machines. | Yes |
| replica-count | int | The number of replicas of the machine in the pool. | No |
| container-image-uri | string | Docker image to run on each worker. | No |
The following table shows the parameters for the two-tower model training job
Step11: Train on Vertex Training
Submit the two-tower training job to Vertex Training. The following command uses a single CPU machine for training. When using single node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set.
Step13: If you want to train using GPUs, you need to write configuration to a YAML file
Step14: If you want to use TFRecord input file format, you can try the following command
Step15: After the job is submitted successfully, you can view its details and logs
Step16: When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below
Step17: For Google Cloud Notebooks users, the TensorBoard widget above won't work. We recommend launching TensorBoard through the Cloud Shell.
In your Cloud Shell, launch TensorBoard on port 8080
Step18: Deploy on Vertex Prediction
Import the model
Our training job will export two TF SavedModels under gs
Step19: Deploy the model
After importing the model, you must deploy it to an endpoint so that you can get online predictions. More information about this process can be found in the official documentation.
Step20: Create a model endpoint
Step21: Deploy model to the endpoint
Step22: Predict
Now that you have deployed the query/candidate encoder model on Vertex Prediction, you can call the model to calculate embeddings for live data. There are two methods of getting predictions, online and batch, which are shown below.
Online prediction
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following function calls the deployed Vertex Prediction model endpoint using Vertex SDK for Python
Step23: You can also do online prediction using the gcloud CLI, as shown below
Step24: Batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
Step25: The following function calls the deployed Vertex Prediction model using the sample query object input file. Note that it uses the model resource directly and doesn't require a deployed endpoint. Once you start the job, you can track its status on the Cloud Console.
Step26: Hyperparameter tuning
After successfully training your model, deploying it, and calling it to make predictions, you may want to optimize the hyperparameters used during training to improve your model's accuracy and performance. See the Vertex AI documentation for an overview of hyperparameter tuning and how to use it in your Vertex Training jobs.
For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimizes are the number of hidden layers, the size of the hidden layers, and the learning rate.
Step27: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial
Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade tensorflow
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile
! gcloud components update --quiet
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/matching_engine/two-tower-model-introduction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/matching_engine/two-tower-model-introduction.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This tutorial demonstrates how to use the Two-Tower built-in algorithm on the Vertex AI platform.
Two-tower models learn to represent two items of various types (such as user profiles, search queries, web documents, answer passages, or images) in the same vector space, so that similar or related items are close to each other. These two items are referred to as the query and candidate object, since when paired with a nearest neighbor search service such as Vertex Matching Engine, the two-tower model can retrieve candidate objects related to an input query object. These objects are encoded by a query and candidate encoder (the two "towers") respectively, which are trained on pairs of relevant items. This built-in algorithm exports trained query and candidate encoders as model artifacts, which can be deployed in Vertex Prediction for usage in a recommendation system.
Dataset
This tutorial uses the movielens_100k sample dataset in the public bucket gs://cloud-samples-data/vertex-ai/matching-engine/two-tower, which was generated from the MovieLens movie rating dataset. For simplicity, the data for this tutorial only includes the user id feature for users, and the movie id and movie title features for movies. In this example, the user is the query object and the movie is the candidate object, and each training example in the dataset contains a user and a movie they rated (we only include positive ratings in the dataset). The two-tower model will embed the user and the movie in the same embedding space, so that given a user, the model will recommend movies it thinks the user will like.
Objective
In this notebook, you will learn how to run the two-tower model.
The tutorial covers the following steps:
1. Setup: Importing the required libraries and setting your global variables.
2. Configure parameters: Setting the appropriate parameter values for the training job.
3. Train on Vertex Training: Submitting a training job.
4. Deploy on Vertex Prediction: Importing and deploying the trained model to a callable endpoint.
5. Predict: Calling the deployed endpoint using online or batch prediction.
6. Hyperparameter tuning: Running a hyperparameter tuning job.
7. Cleaning up: Deleting resources created by this tutorial.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
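As a small illustration of that behavior (the GREETING variable here is just an example, not part of the tutorial), a cell like the following echoes a Python string through the shell:
GREETING = "hello from " + PROJECT_ID
! echo $GREETING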
Set your project ID
If you do not know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project {PROJECT_ID}
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in algorithm also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json  # used below when building prediction requests and the hyperparameter tuning config
import os
import re
import time
from google.cloud import aiplatform
%load_ext tensorboard
Explanation: Import libraries and define constants
End of explanation
DATASET_NAME = "movielens_100k" # Change to your dataset name.
# Change to your data and schema paths. These are paths to the movielens_100k
# sample data.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/training_data/*"
INPUT_SCHEMA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/input_schema.json"
# URI of the two-tower training Docker image.
LEARNER_IMAGE_URI = "us-docker.pkg.dev/vertex-ai-restricted/builtin-algorithm/two-tower"
# Change to your output location.
OUTPUT_DIR = f"{BUCKET_NAME}/experiment/output"
TRAIN_BATCH_SIZE = 100 # Batch size for training.
NUM_EPOCHS = 3 # Number of epochs for training.
print(f"Dataset name: {DATASET_NAME}")
print(f"Training data path: {TRAINING_DATA_PATH}")
print(f"Input schema path: {INPUT_SCHEMA_PATH}")
print(f"Output directory: {OUTPUT_DIR}")
print(f"Train batch size: {TRAIN_BATCH_SIZE}")
print(f"Number of epochs: {NUM_EPOCHS}")
Explanation: Configure parameters
The following table shows parameters that are common to all Vertex Training jobs created using the gcloud ai custom-jobs create command. See the official documentation for all the possible arguments.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| display-name | string | Name of the job. | Yes |
| worker-pool-spec | string | Comma-separated list of arguments specifying a worker pool configuration (see below). | Yes |
| region | string | Region to submit the job to. | No |
The worker-pool-spec flag can be specified multiple times, one for each worker pool. The following table shows the arguments used to specify a worker pool.
| Parameter | Data type | Description | Required |
|--|--|--|--|
| machine-type | string | Machine type for the pool. See the official documentation for supported machines. | Yes |
| replica-count | int | The number of replicas of the machine in the pool. | No |
| container-image-uri | string | Docker image to run on each worker. | No |
The following table shows the parameters for the two-tower model training job:
| Parameter | Data type | Description | Required |
|--|--|--|--|
| training_data_path | string | Cloud Storage pattern where training data is stored. | Yes |
| input_schema_path | string | Cloud Storage path where the JSON input schema is stored. | Yes |
| input_file_format | string | The file format of input. Currently supports jsonl and tfrecord. | No - default is jsonl. |
| job_dir | string | Cloud Storage directory where the model output files will be stored. | Yes |
| eval_data_path | string | Cloud Storage pattern where eval data is stored. | No |
| candidate_data_path | string | Cloud Storage pattern where candidate data is stored. Only used for top_k_categorical_accuracy metrics. If not set, it's generated from training/eval data. | No |
| train_batch_size | int | Batch size for training. | No - Default is 100. |
| eval_batch_size | int | Batch size for evaluation. | No - Default is 100. |
| eval_split | float | Split fraction to use for the evaluation dataset, if eval_data_path is not provided. | No - Default is 0.2 |
| optimizer | string | Training optimizer. Lowercase string name of any TF2.3 Keras optimizer is supported ('sgd', 'nadam', 'ftrl', etc.). See TensorFlow documentation. | No - Default is 'adagrad'. |
| learning_rate | float | Learning rate for training. | No - Default is the default learning rate of the specified optimizer. |
| momentum | float | Momentum for optimizer, if specified. | No - Default is the default momentum value for the specified optimizer. |
| metrics | string | Metrics used to evaluate the model. Can be either auc, top_k_categorical_accuracy or precision_at_1. | No - Default is auc. |
| num_epochs | int | Number of epochs for training. | No - Default is 10. |
| num_hidden_layers | int | Number of hidden layers. | No |
| num_nodes_hidden_layer{index} | int | Num of nodes in hidden layer {index}. The range of index is 1 to 20. | No |
| output_dim | int | The output embedding dimension for each encoder tower of the two-tower model. | No - Default is 64. |
| training_steps_per_epoch | int | Number of steps per epoch to run the training for. Only needed if you are using more than 1 machine or using a master machine with more than 1 gpu. | No - Default is None. |
| eval_steps_per_epoch | int | Number of steps per epoch to run the evaluation for. Only needed if you are using more than 1 machine or using a master machine with more than 1 gpu. | No - Default is None. |
| gpu_memory_alloc | int | Amount of memory allocated per GPU (in MB). | No - Default is no limit. |
End of explanation
learning_job_name = f"two_tower_cpu_{DATASET_NAME}_{TIMESTAMP}"
CREATION_LOG = ! gcloud ai custom-jobs create \
--display-name={learning_job_name} \
--worker-pool-spec=machine-type=n1-standard-8,replica-count=1,container-image-uri={LEARNER_IMAGE_URI} \
--region={REGION} \
--args=--training_data_path={TRAINING_DATA_PATH} \
--args=--input_schema_path={INPUT_SCHEMA_PATH} \
--args=--job-dir={OUTPUT_DIR} \
--args=--train_batch_size={TRAIN_BATCH_SIZE} \
--args=--num_epochs={NUM_EPOCHS}
print(CREATION_LOG)
Explanation: Train on Vertex Training
Submit the two-tower training job to Vertex Training. The following command uses a single CPU machine for training. When using single node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set.
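If you prefer to follow the job's logs from the notebook rather than the Cloud Console, a command along these lines should work once you have extracted the job ID from CREATION_LOG (the JOB_ID extraction is shown later in this notebook):
! gcloud ai custom-jobs stream-logs {JOB_ID} --region={REGION}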
End of explanation
learning_job_name = f"two_tower_gpu_{DATASET_NAME}_{TIMESTAMP}"
config = f"""workerPoolSpecs:
  -
    machineSpec:
      machineType: n1-highmem-4
      acceleratorType: NVIDIA_TESLA_K80
      acceleratorCount: 1
    replicaCount: 1
    containerSpec:
      imageUri: {LEARNER_IMAGE_URI}
      args:
      - --training_data_path={TRAINING_DATA_PATH}
      - --input_schema_path={INPUT_SCHEMA_PATH}
      - --job-dir={OUTPUT_DIR}
      - --training_steps_per_epoch=1500
      - --eval_steps_per_epoch=1500
"""
!echo $'{config}' > ./config.yaml
CREATION_LOG = ! gcloud ai custom-jobs create \
--display-name={learning_job_name} \
--region={REGION} \
--config=config.yaml
print(CREATION_LOG)
Explanation: If you want to train using GPUs, you need to write configuration to a YAML file:
End of explanation
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/tfrecord/*"
learning_job_name = f"two_tower_cpu_tfrecord_{DATASET_NAME}_{TIMESTAMP}"
CREATION_LOG = ! gcloud ai custom-jobs create \
--display-name={learning_job_name} \
--worker-pool-spec=machine-type=n1-standard-8,replica-count=1,container-image-uri={LEARNER_IMAGE_URI} \
--region={REGION} \
--args=--training_data_path={TRAINING_DATA_PATH} \
--args=--input_schema_path={INPUT_SCHEMA_PATH} \
--args=--job-dir={OUTPUT_DIR} \
--args=--train_batch_size={TRAIN_BATCH_SIZE} \
--args=--num_epochs={NUM_EPOCHS} \
--args=--input_file_format=tfrecord
print(CREATION_LOG)
Explanation: If you want to use TFRecord input file format, you can try the following command:
End of explanation
JOB_ID = re.search(r"(?<=/customJobs/)\d+", CREATION_LOG[1]).group(0)
print(JOB_ID)
# View the job's configuration and state.
STATE = "state: JOB_STATE_PENDING"
while STATE not in ["state: JOB_STATE_SUCCEEDED", "state: JOB_STATE_FAILED"]:
DESCRIPTION = ! gcloud ai custom-jobs describe {JOB_ID} --region={REGION}
STATE = DESCRIPTION[-2]
print(STATE)
time.sleep(60)
Explanation: After the job is submitted successfully, you can view its details and logs:
End of explanation
TENSORBOARD_DIR = os.path.join(OUTPUT_DIR, "tensorboard")
%tensorboard --logdir {TENSORBOARD_DIR}
Explanation: When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below:
End of explanation
! gsutil ls {OUTPUT_DIR}
Explanation: For Google Cloud Notebooks users, the TensorBoard widget above won't work. We recommend launching TensorBoard through the Cloud Shell.
In your Cloud Shell, launch TensorBoard on port 8080:
export TENSORBOARD_DIR=gs://xxxxx/tensorboard
tensorboard --logdir=${TENSORBOARD_DIR} --port=8080
Click the "Web Preview" button at the top-right of the Cloud Shell window (looks like an eye in a rectangle).
Select "Preview on port 8080". This should launch the TensorBoard webpage in a new tab in your browser.
After the job finishes successfully, you can view the output directory:
End of explanation
# The following imports the query (user) encoder model.
MODEL_TYPE = "query"
# Use the following instead to import the candidate (movie) encoder model.
# MODEL_TYPE = 'candidate'
DISPLAY_NAME = f"{DATASET_NAME}_{MODEL_TYPE}" # The display name of the model.
MODEL_NAME = f"{MODEL_TYPE}_model" # Used by the deployment container.
aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_NAME,
)
model = aiplatform.Model.upload(
display_name=DISPLAY_NAME,
artifact_uri=OUTPUT_DIR,
serving_container_image_uri="us-central1-docker.pkg.dev/cloud-ml-algos/two-tower/deploy",
serving_container_health_route=f"/v1/models/{MODEL_NAME}",
serving_container_predict_route=f"/v1/models/{MODEL_NAME}:predict",
serving_container_environment_variables={
"MODEL_BASE_PATH": "$(AIP_STORAGE_URI)",
"MODEL_NAME": MODEL_NAME,
},
)
Explanation: Deploy on Vertex Prediction
Import the model
Our training job will export two TF SavedModels under gs://<job_dir>/query_model and gs://<job_dir>/candidate_model. These exported models can be used for online or batch prediction in Vertex Prediction. First, import the query (or candidate) model:
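Optionally, you can first confirm that both SavedModels were exported to the paths described above:
! gsutil ls {OUTPUT_DIR}/query_model {OUTPUT_DIR}/candidate_model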
End of explanation
! gcloud ai models list --region={REGION} --filter={DISPLAY_NAME}
Explanation: Deploy the model
After importing the model, you must deploy it to an endpoint so that you can get online predictions. More information about this process can be found in the official documentation.
End of explanation
endpoint = aiplatform.Endpoint.create(display_name=DATASET_NAME)
Explanation: Create a model endpoint:
End of explanation
model.deploy(
endpoint=endpoint,
machine_type="n1-standard-4",
traffic_split={"0": 100},
deployed_model_display_name=DISPLAY_NAME,
)
Explanation: Deploy model to the endpoint
End of explanation
# Input items for the query model:
input_items = [
{"data": '{"user_id": ["1"]}', "key": "key1"},
{"data": '{"user_id": ["2"]}', "key": "key2"},
]
# Input items for the candidate model:
# input_items = [{
# 'data' : '{"movie_id": ["1"], "movie_title": ["fake title"]}',
# 'key': 'key1'
# }]
encodings = endpoint.predict(input_items)
print(f"Number of encodings: {len(encodings.predictions)}")
print(encodings.predictions[0]["encoding"])
Explanation: Predict
Now that you have deployed the query/candidate encoder model on Vertex Prediction, you can call the model to calculate embeddings for live data. There are two methods of getting predictions, online and batch, which are shown below.
Online prediction
Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following function calls the deployed Vertex Prediction model endpoint using Vertex SDK for Python:
The input data you want predictions on should be provided as a stringified JSON in the data field. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
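For example, assuming the predictions come back in the same order as the inputs (as in the call above), you can pair each returned embedding with its key like this:
key_to_embedding = {
    item["key"]: pred["encoding"]
    for item, pred in zip(input_items, encodings.predictions)
}
print(list(key_to_embedding.keys()))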
End of explanation
request = json.dumps({"instances": input_items})
with open("request.json", "w") as writer:
writer.write(f"{request}\n")
ENDPOINT_ID = endpoint.resource_name
! gcloud ai endpoints predict {ENDPOINT_ID} \
--region={REGION} \
--json-request=request.json
Explanation: You can also do online prediction using the gcloud CLI, as shown below:
End of explanation
QUERY_SAMPLE_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/query_sample.jsonl"
! gsutil cat {QUERY_SAMPLE_PATH}
Explanation: Batch prediction
Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine.
The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. Like with online prediction, it's recommended to have the key field so that you can associate each output embedding with its corresponding input.
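If you want to build your own input file instead of using the provided sample, a minimal sketch (the file name and destination folder here are just examples) is to write one JSON object per line and copy the file to your bucket:
import json
with open("query_objects.jsonl", "w") as f:
    for item in input_items:
        f.write(json.dumps(item) + "\n")
! gsutil cp query_objects.jsonl {BUCKET_NAME}/batch_inputs/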
End of explanation
model.batch_predict(
job_display_name=f"batch_predict_{DISPLAY_NAME}",
gcs_source=[QUERY_SAMPLE_PATH],
gcs_destination_prefix=OUTPUT_DIR,
machine_type="n1-standard-4",
starting_replica_count=1,
)
Explanation: The following function calls the deployed Vertex Prediction model using the sample query object input file. Note that it uses the model resource directly and doesn't require a deployed endpoint. Once you start the job, you can track its status on the Cloud Console.
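When the batch job completes, its output is written under the destination prefix passed above; listing the bucket is an easy way to locate the result files:
! gsutil ls -r {OUTPUT_DIR}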
End of explanation
PARALLEL_TRIAL_COUNT = 4
MAX_TRIAL_COUNT = 8
METRIC = "val_auc"
hyper_tune_job_name = f"hyper_tune_{DATASET_NAME}_{TIMESTAMP}"
config = json.dumps(
{
"displayName": hyper_tune_job_name,
"studySpec": {
"metrics": [{"metricId": METRIC, "goal": "MAXIMIZE"}],
"parameters": [
{
"parameterId": "num_hidden_layers",
"scaleType": "UNIT_LINEAR_SCALE",
"integerValueSpec": {"minValue": 0, "maxValue": 2},
"conditionalParameterSpecs": [
{
"parameterSpec": {
"parameterId": "num_nodes_hidden_layer1",
"scaleType": "UNIT_LOG_SCALE",
"integerValueSpec": {"minValue": 1, "maxValue": 128},
},
"parentIntValues": {"values": [1, 2]},
},
{
"parameterSpec": {
"parameterId": "num_nodes_hidden_layer2",
"scaleType": "UNIT_LOG_SCALE",
"integerValueSpec": {"minValue": 1, "maxValue": 128},
},
"parentIntValues": {"values": [2]},
},
],
},
{
"parameterId": "learning_rate",
"scaleType": "UNIT_LOG_SCALE",
"doubleValueSpec": {"minValue": 0.0001, "maxValue": 1.0},
},
],
"algorithm": "ALGORITHM_UNSPECIFIED",
},
"maxTrialCount": MAX_TRIAL_COUNT,
"parallelTrialCount": PARALLEL_TRIAL_COUNT,
"maxFailedTrialCount": 3,
"trialJobSpec": {
"workerPoolSpecs": [
{
"machineSpec": {
"machineType": "n1-standard-4",
},
"replicaCount": 1,
"containerSpec": {
"imageUri": LEARNER_IMAGE_URI,
"args": [
f"--training_data_path={TRAINING_DATA_PATH}",
f"--input_schema_path={INPUT_SCHEMA_PATH}",
f"--job-dir={OUTPUT_DIR}",
],
},
}
]
},
}
)
! curl -X POST -H "Authorization: Bearer "$(gcloud auth print-access-token) \
-H "Content-Type: application/json; charset=utf-8" \
-d '{config}' https://us-central1-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/hyperparameterTuningJobs
Explanation: Hyperparameter tuning
After successfully training your model, deploying it, and calling it to make predictions, you may want to optimize the hyperparameters used during training to improve your model's accuracy and performance. See the Vertex AI documentation for an overview of hyperparameter tuning and how to use it in your Vertex Training jobs.
For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimizes are the number of hidden layers, the size of the hidden layers, and the learning rate.
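After submitting the request, you can also check on the tuning job from the notebook, for example:
! gcloud ai hp-tuning-jobs list --region={REGION}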
End of explanation
# Delete endpoint resource
endpoint.delete(force=True)
# Delete model resource
model.delete()
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $OUTPUT_DIR
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Reinforcement Learning
Simple examples of RL using Tensorflow and OpenAI gym
Create cartpole environment
Step1: Check action and observation spaces dimensions
Step2: Check observation space boundaries
Step3: I would guess the dimensions are
Step4: Because the action space is discrete, we only need to take the sign of the linear regression output. In this case the weights can be sampled uniformly in [-1, 1]. However, if we want to use the bias term, it should be sampled according to the boundaries of the observation space (?)
Step5: Visualize the observation space
Sample observations using random policies
Step6: assign actions to each observation using the best performing model
Step7: The hill-climbing algorithm
Step8: '0' bin means the algorithm did not converge
Policy gradient algorithm
Step9: build graph
Step11: create session
Python Code:
import gym
env = gym.make('CartPole-v0')
Explanation: Basic Reinforcement Learning
Simple examples of RL using Tensorflow and OpenAI gym
Create cartpole environment
End of explanation
env.action_space, env.observation_space
Explanation: Check action and observation spaces dimensions
End of explanation
list(zip(env.observation_space.low, env.observation_space.high))
Explanation: Check observation space boundaries
End of explanation
import numpy as np
from tqdm import tqdm
N_MODELS = 10000 # models to try
MAX_STEPS = 200 # steps per episodes
Explanation: I would guess the dimensions are:
cart's horizontal position
cart's speed along horizontal position
pole's angle
pole's angular speed
The random guessing algorithm
We will use a linear model to map observation states to actions and perform random search over its weights to find the best parameters.
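Concretely, each candidate policy is a linear threshold rule on the observation vector,
$$ \pi_w(s) = \begin{cases} 1, & w \cdot s > 0 \\ 0, & \text{otherwise}, \end{cases} $$
which is exactly what LinearAgent.action implements below.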
End of explanation
# generate model weights: [n_models, n_params]
models = 2 * np.random.random([N_MODELS, env.observation_space.shape[0]]) - 1 # uniform in [-1, +1]
class LinearAgent(object):
def __init__(self, weights=None):
self._weights = weights
@property
def weights(self):
return self._weights
@weights.setter
def weights(self, new_weights):
self._weights = new_weights
def action(self, state):
a = np.dot(self._weights, state)
return int(a > 0)
def play_episode(env, agent, max_steps=MAX_STEPS):
observations = []
rewards = []
o = env.reset()
for t in range(max_steps):
a = agent.action(o)
o, r, done, info = env.step(a)
rewards.append(r)
observations.append(o)
if done:
break
return observations, rewards
def evaluate_agent(env, agent, n_experiments=20, max_steps=MAX_STEPS):
episode_rewards = []
for e in range(n_experiments):
observations, rewards = play_episode(env, agent, max_steps)
avg_reward = np.array(rewards).sum() / max_steps
episode_rewards.append(avg_reward)
episode_rewards = np.array(episode_rewards)
return episode_rewards.sum(), episode_rewards.mean()
model_metrics = np.zeros((models.shape[0], 2)) # total and mean reward for each model
for i, m in tqdm(enumerate(models)):
agent = LinearAgent(m)
model_metrics[i, :] = evaluate_agent(env, agent)
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.hist(model_metrics[:,1], bins=50)
plt.title('Distribution of expected cumulative reward')
plt.xlabel('mean reward')
plt.show()
print('% of successful models', (model_metrics[:,1] == 1.0).mean())
Explanation: Because the action space is discrete, we only need to take the sign of the linear regression output. In this case the weights can be sampled uniformly in [-1, 1]. However, if we want to use the bias term, it should be sampled according to the boundaries of the observation space (?)
End of explanation
observations = []
for m_id in np.random.randint(0, models.shape[0], size=200):
agent = LinearAgent(models[m_id])
os, rs = play_episode(env, agent)
observations.extend(os)
observations = np.array(observations)
Explanation: Visualize the observation space
Sample observations using random policies
End of explanation
# pick the best model
m_id = model_metrics.argmax(axis=0)[1]
weights = models[m_id]
# assign actions
actions = np.dot(observations, weights)
actions = (actions > 0).astype(int)
from sklearn.decomposition import PCA
pca = PCA(n_components=2, random_state=0)
observations_pca = pca.fit_transform(observations)
plt.scatter(observations_pca[:,0], observations_pca[:,1], s=3, c=actions)
plt.title('PCA projection of the observation space')
plt.show()
Explanation: assign actions to each observation using the best performing model
End of explanation
n_experiments = 500
noise_scale = 1.0
experiment_steps = np.zeros(n_experiments)
for i in tqdm(range(n_experiments)):
old_weights = 2 * np.random.random(env.observation_space.shape) - 1 # uniform in [-1, +1]
agent = LinearAgent()
r_best = 0.0
for step in range(MAX_STEPS):
noise = np.random.randn(*old_weights.shape)
new_weights = old_weights + noise_scale * noise
agent.weights = new_weights
r_total, r_mean = evaluate_agent(env, agent)
if r_mean == 1.0:
experiment_steps[i] = step + 1
break
if r_mean > r_best:
r_best = r_mean
old_weights = new_weights
plt.hist(experiment_steps, bins=40)
plt.title('Distribution of steps for hill-climbing algorithm')
plt.xlabel('steps')
plt.show()
Explanation: The hill-climbing algorithm
End of explanation
import tensorflow as tf
tf.reset_default_graph()
n_state_dims = env.observation_space.shape[0]
n_action_dims = env.action_space.shape[0]
n_state_dims, n_action_dims
Explanation: '0' bin means the algorithm did not converge
Policy gradient algorithm
End of explanation
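The loss built below is the standard REINFORCE surrogate objective: for states $s_t$, actions $a_t$ and discounted returns $R_t$ collected from a trajectory,
$$ L(\theta) = \sum_t R_t \, \big(-\log \pi_\theta(a_t \mid s_t)\big), $$
so minimizing $L$ follows an estimate of the policy gradient. Because the actions are one-hot encoded, the softmax cross-entropy between the policy logits and the chosen action equals $-\log \pi_\theta(a_t \mid s_t)$, which is why the per-step cross-entropies are weighted by the cumulative rewards in the graph below.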
with tf.name_scope('policy'):
states_in = tf.placeholder(tf.float32, [None, n_state_dims], name='states')
W = tf.Variable(tf.truncated_normal([n_state_dims, n_action_dims], stddev=0.1), name='W')
# b = tf.Variable(tf.constant(0.0, shape=[n_action_dims]), name='b')
logits = tf.matmul(states_in, W)
probs = tf.nn.softmax(logits, name='probabilities')
with tf.name_scope('policy_gradient'):
actions_in = tf.placeholder(tf.int32, [None], name='actions')
rewards_in = tf.placeholder(tf.float32, [None], name='cumulative_rewards')
actions_one_hot = tf.one_hot(actions_in, n_action_dims)
# because the actions are mutually exclusive it should be ok to use cross-entropy
entropies = tf.nn.softmax_cross_entropy_with_logits(logits, actions_one_hot)
weighted_entropies = tf.mul(entropies, rewards_in)
loss = tf.reduce_sum(weighted_entropies, name='loss')
with tf.name_scope('optimizer'):
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = tf.train.AdamOptimizer(0.003).minimize(loss, global_step=global_step)
Explanation: build graph
End of explanation
init_op = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init_op)
def policy(state):
"Return sampled action given state"
a_probs = sess.run(probs, feed_dict={states_in: [state]})[0]
a = np.random.choice(2, p=a_probs)
return a
def policy_grad_descent(states, actions, cumulative_rewards):
"Run single gradient update and return loss"
feed_dict = {states_in: states, actions_in: actions, rewards_in: cumulative_rewards}
_, loss_val = sess.run([train_op, loss], feed_dict=feed_dict)
return loss_val
def play_policy(env, policy, n_trajectories=20, n_steps=MAX_STEPS):
"Generate trajectories given policy"
trajectories = []
for i in range(n_trajectories):
s = env.reset()
trajectory = []
for step in range(n_steps):
a = policy(s)
new_s, r, done, info = env.step(a)
trajectory.append([s, a, r])
if done:
break
s = new_s
trajectories.append(trajectory)
return trajectories
def calc_cumulative_rewards(rewards, gamma=0.99):
    """Given rewards at each step, calculate the
    discounted cumulative future reward for each step."""
cumulative_rewards = np.zeros_like(rewards)
discounts = np.power(gamma, np.arange(len(rewards)))
for l in range(len(rewards)):
future_rewards = rewards[l:]
discounted = (future_rewards * discounts[:len(future_rewards)])
cumulative_rewards[l] = discounted.sum()
return cumulative_rewards
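# Worked example (hypothetical numbers): calc_cumulative_rewards([1.0, 1.0, 1.0], gamma=0.99)
# returns approximately [2.9701, 1.99, 1.0], because
#   step 2: 1.0
#   step 1: 1.0 + 0.99*1.0               = 1.99
#   step 0: 1.0 + 0.99*1.0 + 0.99**2*1.0 = 2.9701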
def update_policy(trajectories, policy_grad_descent, n_steps=MAX_STEPS):
"Run policy gradient descent on each trajectory"
losses = []
total_rewards = []
for trajectory in trajectories:
states, actions, rewards = zip(*trajectory)
cumulative_rewards = calc_cumulative_rewards(rewards)
loss = policy_grad_descent(states, actions, cumulative_rewards)
losses.append(loss / n_steps)
total_reward = np.sum(rewards) / n_steps
total_rewards.append(total_reward)
return np.mean(losses), np.mean(total_rewards)
n_epochs = 100
avg_losses = []
avg_rewards = []
for _ in tqdm(range(n_epochs)):
trajectories = play_policy(env, policy)
avg_loss, avg_reward = update_policy(trajectories, policy_grad_descent)
avg_losses.append(avg_loss)
avg_rewards.append(avg_reward)
if avg_reward == 1.0:
print('early stopping')
break
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(avg_losses, 'r')
ax2.plot(avg_rewards, 'g')
ax1.set_xlabel('epochs')
ax1.set_ylabel('total loss', color='r')
ax2.set_ylabel('avg rewards', color='g')
plt.show()
Explanation: create session
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
On the probability of being attacked by a piece placed at random on a chessboard
Two pieces are placed at random on a chessboard. What is the probability that the first piece attacks the second? This document presents the computation of this probability for every chess piece as a function of the board size. Only square boards are considered. The pieces are assumed to be placed simultaneously (so they necessarily stand on different squares), and every square is equally likely to be chosen.
The degree (valency) of a vertex $v$ of a graph $G$ is the number of edges of $G$ incident to the vertex $v$.
The move graph of a chess piece (hereafter the Graph) is the graph depicting all possible moves of the piece on the chessboard: each vertex corresponds to a square of the board, and the edges correspond to the possible moves.
Then the valency of a vertex of the Graph is the number of squares the piece attacks when placed on the square corresponding to that vertex. To simplify the wording, the text below says that "a square has a valency", although of course it is not the square that has a valency but the corresponding vertex of the Graph.
If an event $X$ can occur only together with one of the events $H_1, H_2,..., H_n$, which form a complete group of mutually exclusive events, then the probability $P(X)$ is given by the formula
Step1: The pawn
Step2: Let us examine a few particular cases in search of a pattern.
Step3: The pattern is obvious: there is always a rank (the top or the bottom one, depending on the colour of the piece) from which the pawn attacks no squares; all squares of this rank are $0$-valent. There are $n$ such squares.
The outermost files contain the $1$-valent squares, of which there are $2(n-1)$.
All remaining squares are $2$-valent, and they form a rectangle of size $(n-1)\times(n-2)$.
Then $$ P(X_{pawn}) = \frac{n\cdot 0}{n^{2}(n^{2}-1)} + \frac{2(n-1)\cdot 1}{n^{2}(n^{2}-1)} + \frac{(n-1)(n-2)\cdot 2}{n^{2}(n^{2}-1)}= \frac{2(n-1)({\color{green}1}+n-{\color{green}2})}{n^{2}(n^{2}-1)} = \frac{2(n-1)^{2}}{n^{2}(n^{2}-1)}. $$ Since $(n^{2}-1) = (n+1)(n-1)$, $$ P(X_{pawn}) = \frac{2(n-1)}{n^{2}(n+1)}. $$
The knight
Step4: The number of $2$- and $3$-valent squares is fixed for every $n\geq 4$. The former sit in the corners, and the latter are adjacent to them vertically and horizontally. Hence there are $4$ squares of valency $2$ and twice as many, $8$, of valency $3$. The $4$-valent squares form an arithmetic progression with first term $4$ and common difference $4$ for all $n\geq 4$ (each time $n$ grows by one, one more $4$-valent square appears on each side). It is easy to see that the number of $6$-valent squares grows in the same way, although they exist only for $n\geq 5$. Thus there are $4(n-3)$ squares of valency $4$ and $4(n-4)$ squares of valency $6$. The number of $8$-valent squares grows quadratically, and they also exist only for $n\geq 5$; their count is $(n-4)^2$. Altogether we have
Step5: One can see that squares of equal valency lie along the perimeters of the concentric squares they form. Since for even $n$ the four squares of maximal valency sit in the centre of the board, while for odd $n$ there is a single one, it is convenient to treat even and odd $n$ separately.
Even $n$
How many distinct valency values are there, and what are they? The smallest value equals $(n-1)$: the number of squares on the diagonal minus the square occupied by the piece itself. The largest value is $(n-1) + (n-2) = (2n-3)$, since it exceeds the smallest value by the number of squares on the diagonal of a square with side $(n-1)$ minus the square occupied by the piece itself.
Let $s$ be the number of steps of size $2$ needed to go from the value $(n-1)$ to the value $(2n-3)$. Then
$$ n-1 + 2s = 2n-3, $$ $$ 2s = {\color{red} {2n}} - {\color{green} 3} - {\color{red} n} + {\color{green} 1} = n - 2 \Rightarrow s = \frac{n-2}{2}. $$
Since $n$ is even, $s \in \mathbb{Z}$.
However, because a step is taken between two different values, the number of distinct valency values is one more than the number of steps needed to go from the minimum to the maximum. We thus have $\frac{n-2}{2} + 1 = \frac{n}{2} - {\color{green} 1} +{\color{green} 1} = \frac{n}{2}.$ So a board with side $n$ contains $\frac{n}{2}$ distinct valency values, that is, $\frac{n}{2}$ concentric squares.
How many squares carry each value? The number of cells lying on the perimeter of a square of cells with side $\lambda$ equals four times the side minus the four corner cells, which would otherwise be counted twice. Hence the number of squares sharing one valency value is $4\lambda-4 = 4(\lambda-1)$, where $\lambda$ runs with step $2$ from $2$ (the central square) to $n$ (the outer one).
Moreover, $\lambda$ determines not only the number of such squares but also the value itself: it equals the sum of $\lambda$ and the smallest valency value occurring on the board. Thus, knowing the smallest valency value and the number of concentric squares, it is not hard to write down the $\lambda$-dependent sum $P(X^{even}_{bishop}) = \sum P(H_i) \cdot P(X|H_i)$. It is more convenient, however, to sum over an index that changes with step $1$, so we substitute $k = \frac{\lambda}{2}.$ Now we can write
Step6: A well-known property of the rook is that, regardless of its position on the board, it always controls the same number of squares, namely $2(n-1)$: the sum of the squares along its rank and file minus the square the rook itself occupies.
$$P(X_{rook}) = \frac{n^{2}\cdot 2(n-1)}{n^{2}(n^{2}-1)} = \frac{2}{(n+1)}.$$
The queen
The layout of valencies for the bishop and the queen is practically identical, except that the smallest valency value of the queen is three times that of the bishop.
Step7: Since the queen combines the abilities of the bishop and the rook, its expression can be obtained as the sum of the expressions for these two pieces
Step8: One can see that the edges of the board, apart from the $3$-valent corners, are $5$-valent, while all the remaining space is $8$-valent. Since there are $4$ edges and $(n-2)$ $5$-valent squares on each edge, we have
Step9: The plot showing how the probability depends on the board size is drawn as a function of a real variable for the sake of clarity.
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from string import ascii_uppercase as alphabet
def get_board(board_size):
x, y = np.meshgrid(range(board_size), range(board_size))
board = np.empty(shape=(board_size, board_size), dtype='uint8')
text_colors = np.empty_like(board, dtype='<U5')
# force left bottom corner cell to be black
if board_size % 2 == 0:
extra_term = 1
else:
extra_term = 0
for i, j in zip(x.flatten(), y.flatten()):
board[i, j] = (i + j + extra_term) % 2
# text color should be the opposite to a cell color
text_colors[i, j] = 'black' if board[i, j] else 'white'
return board, text_colors
def get_valencies(piece, board):
# Get valencies for the given piece on the given board
valencies = np.empty_like(board)
if piece == 'Pawn':
valencies = pawn(valencies)
elif piece == 'Knight':
valencies = knight(valencies)
elif piece == 'Rook':
valencies = rook(valencies)
elif piece == 'King':
valencies = king(valencies)
else:
valencies = bishop_or_queen(piece, valencies)
return valencies
def plot_board(board, text_colors, piece):
board_size = np.shape(board)[0]
x, y = np.meshgrid(range(board_size), range(board_size))
# fixed figure size
plt.figure(figsize=(2*board_size/5, 2*board_size/5))
ax = plt.subplot(111)
ax.imshow(board, cmap='gray', interpolation='none')
# Display valency (degree) values
val_board = get_valencies(piece, board)
for i, j, valency, text_col in zip(x.flatten(), y.flatten(),
val_board.flatten(),
text_colors.flatten()):
ax.text(i, j, str(valency), color=text_col,
va='center', ha='center', fontsize=12)
ax.set_xticks(np.arange(board_size+1)) # one tick per cell
ax.set_xticklabels(alphabet[:board_size]) # set letters as ticklabels
# one tick per cell
ax.set_yticks(np.arange(board_size+1))
# set numbers as ticklabels (upside down)
ax.set_yticklabels(np.arange(board_size, 0, -1))
ax.axis('tight') # get rid of the white spaces on the edges
# ax.set_title(piece, fontsize=30)
plt.show()
Explanation: On the probability of being attacked by a piece placed at random on a chessboard
Two pieces are placed at random on a chessboard. What is the probability that the first piece attacks the second? This document presents the computation of this probability for every chess piece as a function of the board size. Only square boards are considered. The pieces are assumed to be placed simultaneously (so they necessarily stand on different squares), and every square is equally likely to be chosen.
The degree (valency) of a vertex $v$ of a graph $G$ is the number of edges of $G$ incident to the vertex $v$.
The move graph of a chess piece (hereafter the Graph) is the graph depicting all possible moves of the piece on the chessboard: each vertex corresponds to a square of the board, and the edges correspond to the possible moves.
Then the valency of a vertex of the Graph is the number of squares the piece attacks when placed on the square corresponding to that vertex. To simplify the wording, the text below says that "a square has a valency", although of course it is not the square that has a valency but the corresponding vertex of the Graph.
If an event $X$ can occur only together with one of the events $H_1, H_2,..., H_n$, which form a complete group of mutually exclusive events, then the probability $P(X)$ is computed by the formula: $$P(X) = P(H_1) \cdot P(X|H_1) + P(H_2) \cdot P(X|H_2) + ... + P(H_n) \cdot P(X|H_n),$$ which is called the law of total probability.
Let the event $X_{piece}$ = "the first piece (piece) attacks the second on a board of size $n\times n$", let $a$ denote a valency value and $b$ the number of squares having valency $a$, and let each hypothesis $H_i$ = "the first piece stands on a square of valency $a_i$". Then $P(H_i) = \frac{b_i}{n^{2}}$ by the classical definition of probability, i.e. the ratio of the number of outcomes favourable to $H_i$ to the number of all equally possible outcomes. $P(X_{piece}|H_i) = \frac{a_i}{n^{2}-1}$ for the same reason: given $H_i$, the event $X_{piece}$ is favoured by $a_i$ outcomes (the second piece stands on one of the squares attacked from a square of valency $a_i$), while the number of all equally possible outcomes has decreased by one, because one square is already occupied by the first piece.
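Combining the two expressions, every piece-specific derivation below is an instance of the same formula,
$$ P(X_{piece}) = \sum_i \frac{b_i}{n^{2}}\cdot\frac{a_i}{n^{2}-1} = \frac{\sum_i a_i b_i}{n^{2}(n^{2}-1)}, $$
i.e. the sum of all valencies on the board divided by $n^{2}(n^{2}-1)$.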
End of explanation
def pawn(valencies):
valencies[0, :] = 0 # empty horizontal line
valencies[1:, 0] = valencies[1:, -1] = 1 # vertical edges
valencies[1:, 1:-1] = 2
return valencies
Explanation: The pawn
End of explanation
def special_cases(piece, board_sizes):
''' Plot boards of every board_size,
contained in board_sizes list for given piece.
'''
for board_size in board_sizes:
board, text_colors = get_board(board_size=board_size)
plot_board(board, text_colors, piece=piece)
special_cases(piece='Pawn', board_sizes=range(4,6))
Explanation: Let us examine a few particular cases in search of a pattern.
End of explanation
def knight(valencies):
board_size = valencies.shape[0]
if board_size > 3:
# Four points in each corner are the same for any board size > 3.
# corner cells
valencies[0, 0] = valencies[0, -1] = \
valencies[-1, 0] = valencies[-1, -1] = 2
# cells horizontally/vertically adjacent to the corners
valencies[0, 1] = valencies[1, 0] = \
valencies[0, -2] = valencies[1, -1] = \
valencies[-2, 0] = valencies[-1, 1] = \
valencies[-2, -1] = valencies[-1, -2] = 3
# cells diagonally adjacent
valencies[1, 1] = valencies[1, -2] = \
valencies[-2, 1] = valencies[-2, -2] = 4
if board_size > 4:
valencies[0, 2:-2] = valencies[2:-2, 0] = \
valencies[2:-2, -1] = valencies[-1, 2:-2] = 4
valencies[1, 2:-2] = valencies[2:-2, 1] = \
valencies[2:-2, -2] = valencies[-2, 2:-2] = 6
valencies[2:-2, 2:-2] = 8
# Patholigical cases
elif board_size == 3:
valencies = 2 * np.ones((board_size, board_size), dtype='uint8')
valencies[1, 1] = 0
else:
valencies = np.zeros((board_size, board_size), dtype='uint8')
return valencies
special_cases(piece='Knight', board_sizes=[4,5,6])
Explanation: The pattern is obvious: there is always a rank (the top or the bottom one, depending on the colour of the piece) from which the pawn attacks no squares; all squares of this rank are $0$-valent. There are $n$ such squares.
The outermost files contain the $1$-valent squares, of which there are $2(n-1)$.
All remaining squares are $2$-valent, and they form a rectangle of size $(n-1)\times(n-2)$.
Then $$ P(X_{pawn}) = \frac{n\cdot 0}{n^{2}(n^{2}-1)} + \frac{2(n-1)\cdot 1}{n^{2}(n^{2}-1)} + \frac{(n-1)(n-2)\cdot 2}{n^{2}(n^{2}-1)}= \frac{2(n-1)({\color{green}1}+n-{\color{green}2})}{n^{2}(n^{2}-1)} = \frac{2(n-1)^{2}}{n^{2}(n^{2}-1)}. $$ Since $(n^{2}-1) = (n+1)(n-1)$, $$ P(X_{pawn}) = \frac{2(n-1)}{n^{2}(n+1)}. $$
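As a quick sanity check, the standard board $n = 8$ gives $P(X_{pawn}) = \frac{2\cdot 7}{64\cdot 9} = \frac{14}{576} \approx 0.024$.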
The knight
End of explanation
def bishop_or_queen(piece, valencies):
board_size = np.shape(valencies)[0]
if piece == 'Bishop':
smallest_val = board_size-1
else:
smallest_val = 3*(board_size-1)
# external square
valencies[0, :] = valencies[:, 0] = \
valencies[:, -1] = valencies[-1, :] = smallest_val
# internal sqares
for i in range (1, int(board_size/2)+1):
# top, left
# right, bottom
valencies[i, i:-i] = valencies[i:-i, i] = \
valencies[i:-i, -(i+1)] = valencies[-(i+1), i:-i] = \
smallest_val + 2*i
return valencies
special_cases(piece='Bishop', board_sizes=range(4,8))
Explanation: The number of $2$- and $3$-valent squares is fixed for every $n\geq 4$. The former sit in the corners, and the latter are adjacent to them vertically and horizontally. Hence there are $4$ squares of valency $2$ and twice as many, $8$, of valency $3$. The $4$-valent squares form an arithmetic progression with first term $4$ and common difference $4$ for all $n\geq 4$ (each time $n$ grows by one, one more $4$-valent square appears on each side). It is easy to see that the number of $6$-valent squares grows in the same way, although they exist only for $n\geq 5$. Thus there are $4(n-3)$ squares of valency $4$ and $4(n-4)$ squares of valency $6$. The number of $8$-valent squares grows quadratically, and they also exist only for $n\geq 5$; their count is $(n-4)^2$. Altogether we have:
$$ P(X_{knight}) = \frac{4\cdot 2}{n^{2}(n^{2}-1)} + \frac{8\cdot 3}{n^{2}(n^{2}-1)} + \frac{4(n-3)\cdot 4}{n^{2}(n^{2}-1)} + \frac{4(n-4)\cdot 6}{n^{2}(n^{2}-1)} + \frac{(n-4)^2\cdot 8}{n^{2}(n^{2}-1)} = $$
$$ = \frac{32 + 24(n-4) + 16(n-3) + 8(n-4)^{2}}{n^{2}(n^{2}-1)} = $$
$$ = \frac{8(4+3(n-4)+2(n-3)+(n-4)^{2})}{n^{2}(n^{2}-1)} = \frac{8({\color{green} 4}+{\color{red} {3n}}-{\color{green} {12}}+{\color{red} {2n}} - {\color{green} 6}+ n^{2}-{\color{red} {8n}}+{\color{green} {16}})}{n^{2}(n^{2}-1)} = $$
$$ \frac{8(n^{2}-3n+2)}{n^{2}(n^{2}-1)} = \frac{8(n-1)(n-2)}{n^{2}(n^{2}-1)} = \frac{8(n-2)}{n^{2}(n+1)}. $$
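The same kind of exact check (illustrative only) confirms this simplification for $n \geq 4$:
from fractions import Fraction
for n in range(4, 40):
    num = 4*2 + 8*3 + 4*(n - 3)*4 + 4*(n - 4)*6 + (n - 4)**2*8
    assert Fraction(num, n**2*(n**2 - 1)) == Fraction(8*(n - 2), n**2*(n + 1))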
Bishop
End of explanation
def rook(valencies):
board_size = np.shape(valencies)[0]
x, y = np.meshgrid(range(board_size), range(board_size))
for i, j in zip(x.flatten(), y.flatten()):
valencies[i, j] = 2*(board_size-1)
return valencies
special_cases(piece='Rook', board_sizes=range(4,6))
Explanation: Squares of equal valency clearly lie along the perimeters of the concentric squares they form. Since for even $n$ the centre of the board holds $4$ squares of maximal valency, while for odd $n$ it holds a single one, it is convenient to treat even and odd $n$ separately.
Even $n$
How many distinct valency values are there, and how large are they? The smallest value is $(n-1)$: the number of squares on a diagonal minus the square occupied by the piece itself. The largest value is $(n-1) + (n-2) = (2n-3)$, since it exceeds the smallest one by the number of squares on the diagonal of a square with side $(n-1)$, minus the square occupied by the piece.
Let $s$ be the number of steps of size $2$ needed to get from the value $(n-1)$ to the value $(2n-3)$. Then
$$ n-1 + 2s = 2n-3, $$ $$ 2s = {\color{red} {2n}} - {\color{green} 3} - {\color{red} n} + {\color{green} 1} = n - 2 \Rightarrow s = \frac{n-2}{2}. $$
Since $n$ is even, $s \in \mathbb{Z}$.
However, since each step is taken between two distinct values, the number of distinct valency values is one greater than the number of steps needed to get from the minimum to the maximum. We therefore have $\frac{n-2}{2} + 1 = \frac{n}{2} - {\color{green} 1} +{\color{green} 1} = \frac{n}{2}.$ So a board of side $n$ contains $\frac{n}{2}$ distinct valency values, that is, $\frac{n}{2}$ concentric squares.
How many squares carry each value? The number of cells on the perimeter of a square of side $\lambda$ equals four times the side minus the four corner cells, which would otherwise be counted twice. Hence the number of squares sharing a given valency is $4\lambda-4 = 4(\lambda-1)$, where $\lambda$ runs with step $2$ from $2$ (the central square) to $n$ (the outer one).
Moreover, $\lambda$ determines not only how many squares share a valency but also the valency itself: on a ring of side $\lambda$ it equals the smallest valency on the board plus $(n-\lambda)$. Thus, knowing the smallest valency and the number of concentric squares, it is easy to write down the sum $P(X^{even}_{bishop}) = \sum P(H_i) \cdot P(X|H_i)$ over the rings. It is more convenient, however, to sum over an index that changes with step $1$, so we substitute $k = \frac{\lambda}{2}$ (the order in which the rings are summed does not matter). Now we can write:
$$ P(X^{even}_{bishop}) = \sum_{k = 1}^{\frac{n}{2}} \frac{4(n+1-2k)\cdot(n-3+2k)} {n^{2}(n^{2}-1)} = \frac{4}{n^{2}(n^{2}-1)} \sum_{k = 1}^{\frac{n}{2}} n^{2} - {\color{red} {3n}} + {\color{blue} {2kn}} + {\color{red} {n}} - 3 + {\color{cyan} {2k}} - {\color{blue} {2kn}} + {\color{cyan} {6k}} - 4k^{2} = $$
$$ =\frac{4}{n^{2}(n^{2}-1)} \sum_{k = 1}^{\frac{n}{2}} n^{2} - 2n - 3 + 8k - 4k^{2}. $$
Let us move the first three terms outside the summation sign, since they do not depend on $k$, multiplying them by $\frac{n}{2}$, the number of times they occur in the sum:
$$ P(X^{even}_{bishop}) = \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n}{2}(n^{2} - 2n - 3) + \sum_{k = 1}^{\frac{n}{2}}8k - 4k^{2}] $$
Consider the expression under the summation sign separately.
$$ \sum_{k = 1}^{\frac{n}{2}}8k - 4k^{2} = 8\sum_{k = 1}^{\frac{n}{2}}k - 4\sum_{k = 1}^{\frac{n}{2}} k^{2}. $$
Denote $S_1 = 8\sum_{k = 1}^{\frac{n}{2}}k$ and $S_2 = 4\sum_{k = 1}^{\frac{n}{2}} k^{2}.$
$S_1$ is $8$ times the sum of the first $\frac{n}{2}$ natural numbers, which is the sum of the first $\frac{n}{2}$ terms of an arithmetic progression, so
$$ S_1 = 8\frac{\frac{n}{2}(\frac{n}{2}+1)}{2} = 4\frac{n}{2}(\frac{n}{2}+1) = 2n(\frac{n}{2}+1) = \frac{2n^2}{2}+2n = n^2 + 2n = n(n+2). $$
$S_2$ is $4$ times the sum of the squares of the first $\frac{n}{2}$ natural numbers, so
$$ S_2 = 4\frac{\frac{n}{2}(\frac{n}{2}+1)(2\frac{n}{2}+1)}{6} = \frac{n(n+2)(n+1)}{6}. $$
$$ S_1 - S_2 = n(n+2) - \frac{n(n+2)(n+1)}{6} = n(n+2) (1 - \frac{(n+1)}{6}) = $$
$$ = \frac{n(n+2)({\color{green} 6}-n-{\color{green} 1})}{6} = \frac{n(n+2)(-n + 5)}{6} = -\frac{n(n+2)(n-5)}{6}.$$
Then
$$ P(X^{even}_{bishop}) = \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n}{2}(n^{2} - 2n - 3) - \frac{n(n+2)(n-5)}{6} ] = \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n(3n^{2} - 6n - 9)}{6} - \frac{n(n+2)(n-5)}{6} ] = $$
$$ = \frac{4n}{6n^{2}(n^{2}-1)}({\color{orange} {3n^{2}}} - {\color{red} {6n}} - {\color{green} 9} - {\color{orange} {n^2}} + {\color{red} {5n}} - {\color{red} {2n}} + {\color{green} {10}}) = \frac{2}{3n(n^{2}-1)}(2n^2 - 3n + 1) = \frac{2(2n-1)(n-1)}{3n(n^{2}-1)} = \frac{2(2n-1)}{3n(n+1)}. $$
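A quick exact check of this simplification for even boards (illustrative only):
from fractions import Fraction
for n in range(4, 41, 2):  # even board sizes
    s = sum((n + 1 - 2*k)*(n - 3 + 2*k) for k in range(1, n//2 + 1))
    assert Fraction(4*s, n**2*(n**2 - 1)) == Fraction(2*(2*n - 1), 3*n*(n + 1))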
Odd $n$
How many distinct valency values are there? The smallest value is $(n-1)$, by the same reasoning as for even $n$. The largest value is obviously twice the smallest: $(n-1) + (n-1) = 2(n-1)$.
Let $s$ be the number of steps of size $2$ needed to get from the value $(n-1)$ to the value $2(n-1)$. Then
$$n-1 + 2s = 2n-2,$$ $$2s = {\color{red} {2n}} - {\color{green} 2} - {\color{red} n} + {\color{green} 1} = n - 1 \Rightarrow s = \frac{n-1}{2}.$$
Since $n$ is odd, $s \in \mathbb{Z}$. Altogether we get $\frac{n-1}{2} + 1 = \frac{n}{2} - {\color{green} {\frac{1}{2}}} +{\color{green} 1} = \frac{n}{2} + \frac{1}{2} = \frac{n+1}{2}$ distinct valency values.
How many squares carry each value? The reasoning is the same as for even $n$, except that the expression $4(\lambda-1)$ equals zero at $\lambda = 1$ (the central square of the board). For this reason the term $P(H_{\frac{n+1}{2}}) \cdot P(X|H_{\frac{n+1}{2}})$ has to be taken outside the common sum, and the summation index takes one value fewer: $\frac{n+1}{2} - 1 = \frac{n}{2} + \frac{1}{2} - 1 = \frac{n}{2} + {\color{green} {\frac{1}{2}}} - {\color{green} 1} = \frac{n}{2} - \frac{1}{2} = \frac{n-1}{2}.$
We can then write:
$$ P(X^{odd}_{bishop}) = \frac{1\cdot 2(n-1)}{n^{2}(n^{2}-1)} + \sum_{k = 1}^{\frac{n-1}{2}} \frac{4(n+1-2k)\cdot(n-3+2k)} {n^{2}(n^{2}-1)}. $$
It is easy to see that the expression under the summation sign differs from the one for $P(X^{even}_{bishop})$ only by the upper summation limit. Hence, analogously to the previous calculation, we denote $S_1 = 8\sum_{k = 1}^{\frac{n-1}{2}}k$, $S_2 = 4\sum_{k = 1}^{\frac{n-1}{2}} k^{2}.$
$$ S_1 = 8\frac{\frac{n-1}{2}(\frac{n-1}{2}+1)}{2} = 4\frac{n-1}{2}(\frac{n+1}{2}) = (n-1)(n+1). $$
$$ S_2 = 4\frac{\frac{n-1}{2}(\frac{n-1}{2}+1)(2\frac{n-1}{2}+1)}{6} = \frac{(n-1)(\frac{n+1}{2})n}{3} = \frac{(n-1)(n+1)n}{6}. $$
$$ S_1 - S_2 = (n-1)(n+1) - \frac{(n-1)(n+1)n}{6} = (n-1)(n+1)(1 - \frac{n}{6}) = $$
$$ = \frac{(n-1)(n+1)(6 - n)}{6} = -\frac{(n-1)(n+1)(n-6)}{6}. $$
Then
$$ P(X^{odd}_{bishop}) = \frac{2(n-1)}{n^{2}(n^{2}-1)} + \frac{4}{n^{2}(n^{2}-1)}\ [\frac{n-1}{2}(n^{2} - 2n - 3) -\frac{(n-1)(n+1)(n-6)}{6}] = $$
$$ = \frac{2}{n^{2}(n+1)} + \frac{4(n-1)}{n^{2}(n^{2}-1)} [\frac{3n^2 - 6n - 9}{6} -\frac{(n+1)(n-6)}{6}] = $$
$$ = \frac{2}{n^{2}(n+1)} + \frac{4}{6n^{2}(n+1)}({\color{orange} {3n^2}} - {\color{red} {6n}} - {\color{green} 9} - {\color{orange} {n^2}} + {\color{red} {6n}} - {\color{red} n} + {\color{green} 6}) = $$
$$ = \frac{2}{n^{2}(n+1)} + \frac{4}{6n^{2}(n+1)}(2n^2 - n - 3) = \frac{{\color{green} {12}} + 8n^2 - 4n - {\color{green} {12}}}{6n^{2}(n+1)} = \frac{4n(2n-1)}{6n^{2}(n+1)} = \frac{2(2n-1)}{3n(n+1)}. $$
As we can see, the parity of the board does not affect the probability in question: $P(X^{even}_{bishop}) = P(X^{odd}_{bishop}) = P(X_{bishop}) = \frac{2(2n-1)}{3n(n+1)}$.
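An exact check for odd boards, including the single central-square term (illustrative only), confirms the same closed form:
from fractions import Fraction
for n in range(5, 41, 2):  # odd board sizes
    s = 2*(n - 1) + sum(4*(n + 1 - 2*k)*(n - 3 + 2*k) for k in range(1, (n - 1)//2 + 1))
    assert Fraction(s, n**2*(n**2 - 1)) == Fraction(2*(2*n - 1), 3*n*(n + 1))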
Rook
End of explanation
special_cases(piece='Queen', board_sizes=range(4,8))
Explanation: A well-known property of the rook is that, regardless of where it stands on the board, it always controls the same number of squares, namely $2(n-1)$: the $(n-1)$ remaining squares of its rank plus the $(n-1)$ remaining squares of its file.
$$P(X_{rook}) = \frac{n^{2}\cdot 2(n-1)}{n^{2}(n^{2}-1)} = \frac{2}{(n+1)}.$$
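An exact one-line check of this identity (illustrative only):
from fractions import Fraction
assert all(Fraction(n**2*2*(n - 1), n**2*(n**2 - 1)) == Fraction(2, n + 1) for n in range(2, 50))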
Queen
The layout of valencies for the bishop and the queen is practically identical, except that the smallest valency of the queen is three times that of the bishop.
End of explanation
def king(valencies):
# corners : top left = top right = \
# bottom left = bottom right
valencies[0, 0] = valencies[0, -1] = \
valencies[-1, 0] = valencies[-1, -1] = 3
# edges : top, left, right, bottom
valencies[0, 1:-1] = valencies[1:-1, 0] = \
valencies[1:-1, -1] = valencies[-1, 1:-1] = 5
# center
valencies[1:-1, 1:-1] = 8
return valencies
special_cases(piece='King', board_sizes=range(4,6))
Explanation: Since the queen combines the abilities of the bishop and the rook, its expression can be obtained as the sum of the expressions for those two pieces:
$$ P(X_{queen}) = \frac{2(2n-1)}{3n(n+1)} + \frac{2}{n+1} = \frac{2(2n-1) + 6n}{3n(n+1)} = \frac{{\color{red} {4n}} - 2 + {\color{red} {6n}}}{3n(n+1)} = \frac{10n - 2}{3n(n+1)} = \frac{2(5n-1)}{3n(n+1)}. $$
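An exact check that the bishop and rook terms add up to this expression (illustrative only):
from fractions import Fraction
for n in range(2, 50):
    assert Fraction(2*(2*n - 1), 3*n*(n + 1)) + Fraction(2, n + 1) == Fraction(2*(5*n - 1), 3*n*(n + 1))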
King
End of explanation
def get_probabilities(piece, n):
# NOTE: Results can be wrong for large n because of dividing by
# the huge denominator!
if piece == 'Pawn':
return 2*(n-1)/((n**2)*(n+1))
elif piece == 'Knight':
return 8*(n-2)/((n**2)*(n+1))
elif piece == 'Bishop':
return 2*(2*n-1)/(3*n*(n+1))
elif piece == 'Rook':
return 2/(n+1)
elif piece == 'Queen':
return 2*(5*n-1)/(3*n*(n+1))
elif piece == 'King':
return 4*(2*n-1)/(n**2*(n+1))
def straightforward_prob(piece, board_size):
# Get probability directly from the board of valencies
board, _ = get_board(board_size)
val_board = get_valencies(piece, board)
unique, counts = np.unique(val_board, return_counts=True)
prob = np.dot(unique, counts)/((board_size)**2 * (board_size**2 - 1))
return prob
Explanation: The edges of the board, apart from the $3$-valent corners, are $5$-valent, and all the remaining space is $8$-valent. Since there are $4$ edges and $(n-2)$ five-valent squares on each edge, we have:
$$ P(X_{king}) = \frac{4\cdot 3}{n^{2}(n^{2}-1)} +\frac{4(n-2)\cdot 5}{n^{2}(n^{2}-1)} +\frac{(n-2)^2\cdot 8}{n^{2}(n^{2}-1)} = $$
$$ = \frac{12 + 20(n-2) + 8(n-2)^2}{n^{2}(n^{2}-1)} = \frac{4(3 + 5(n-2)+2(n-2)^2)}{n^{2}(n^{2}-1)} = $$
$$ = \frac{4(3 + 5n-10+2(n^2 - 4n + 4))}{n^{2}(n^{2}-1)} = \frac{4({\color{green} 3} + {\color{red} {5n}}-{\color{green} {10}}+2n^2 - {\color{red} {8n}} + {\color{green} {8}} )}{n^{2}(n^{2}-1)} = $$
$$ = \frac{4(2n^2 - 3n + 1)}{n^{2}(n^{2}-1)} = \frac{4(2n-1)(n-1)}{n^{2}(n^{2}-1)} = \frac{4(2n-1)}{n^{2}(n+1)}. $$
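An exact check of this simplification (illustrative only):
from fractions import Fraction
for n in range(4, 40):
    num = 4*3 + 4*(n - 2)*5 + (n - 2)**2*8
    assert Fraction(num, n**2*(n**2 - 1)) == Fraction(4*(2*n - 1), n**2*(n + 1))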
End of explanation
start = 2
end = 16
step = 0.02
x = np.arange(start, end)
names_list = ['Pawn', 'Knight', 'Bishop', 'Rook', 'Queen', 'King']
# Check if analytical results match straightforward calculations
for name in names_list:
for board_size in x:
y = get_probabilities(name, board_size)
if not y == straightforward_prob(name, board_size):
print('Mistake in equation for %s' % name)
# print('Analytical results approved')
# Let's expand the range from Z to R for the sake of visual clarity
x = np.arange(start, end, step)
fig, ax = plt.subplots(figsize=(8, 5))
for name in names_list:
y = get_probabilities(name, x)
plt.plot(x, y, label=name, linewidth=3.0)
legend = plt.legend(loc='upper right')
for label in legend.get_lines():
label.set_linewidth(3)
for label in legend.get_texts():
label.set_fontsize(20)
plt.xlabel("Board size", fontsize=20)
plt.ylabel("Probability", fontsize=20)
plt.show()
Explanation: For visual clarity, the plot of the probability as a function of board size is drawn as a function of a real variable.
End of explanation |
14,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Share the Insight
There are two main insights we want to communicate.
- Bangalore is the largest market for Onion Arrivals.
- Onion Price variation has increased in recent years.
Let us explore how we can communicate these insights visually.
Preprocessing to get the data
Step1: Let us plot the Cities in a Geographic Map
Step2: PRINCIPLE
Step3: We can do a crude aspect ratio adjustment to make the cartesian coordinate systesm appear like a mercator map | Python Code:
# Import the library we need, which is Pandas and Matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Set some parameters to get good visuals - style to ggplot and size to 15,10
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 10)
# Read the csv file of Monthwise Quantity and Price csv file we have.
df = pd.read_csv('MonthWiseMarketArrivals_clean.csv')
# Change the index to the date column
df.index = pd.PeriodIndex(df.date, freq='M')
# Sort the data frame by date
df = df.sort_values(by = "date")
# Get the data for year 2015
df2015 = df[df.year == 2015]
# Groupby on City to get the sum of quantity
df2015City = df2015.groupby(['city'], as_index=False)['quantity'].sum()
df2015City = df2015City.sort_values(by = "quantity", ascending = False)
df2015City.head()
Explanation: Share the Insight
There are two main insights we want to communicate.
- Bangalore is the largest market for Onion Arrivals.
- Onion Price variation has increased in recent years.
Let us explore how we can communicate these insights visually.
Preprocessing to get the data
End of explanation
# Load the geocode file
dfGeo = pd.read_csv('city_geocode.csv')
dfGeo.head()
Explanation: Let us plot the Cities in a Geographic Map
End of explanation
dfCityGeo = pd.merge(df2015City, dfGeo, how='left', on=['city', 'city'])
dfCityGeo.head()
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100)
Explanation: PRINCIPLE: Joining two data frames
There will be many cases in which your data is in two different dataframes and you would like to merge them into one dataframe. Let us look at one example of this, which is called a left join
End of explanation
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100, figsize = [10,11])
# Let us use quantity as the size of the bubble
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity, figsize = [10,11])
# Let us scale down the quantity variable
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000, figsize = [10,11])
# Reduce the opacity of the color, so that we can see overlapping values
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000, alpha = 0.5, figsize = [10,11])
Explanation: We can do a crude aspect ratio adjustment to make the Cartesian coordinate system appear like a Mercator map
End of explanation |
14,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Global Surface Temperature
This example uses historical data since 1880 on average global surface temperature changes from NASA's GISS Surface Temperature Analysis (GISTEMP) (original file) That site has lots of other data sets in csv format, too.
Click the "play" icon or press shift+enter to execute each cell.
Step1: Importing a data set
This next cell may take a little while to run if it's grabbing a pretty big data set. The cell label to the left will look like "In [*]" while it's still thinking and "In [2]" when it's finished.
Step2: We can view the first few rows of the file we just imported.
Step3: Plotting the data
Step4: Edit and re-plot
If you like Randall Monroe's webcomic XKCD as much as I do, you can make your plots look like his hand-drawn ones. Thanks to Jake VanderPlas for sorting that out. | Python Code:
# import the software packages needed
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
inline_rc = dict(mpl.rcParams)
Explanation: Global Surface Temperature
This example uses historical data since 1880 on average global surface temperature changes from NASA's GISS Surface Temperature Analysis (GISTEMP) (original file) That site has lots of other data sets in csv format, too.
Click the "play" icon or press shift+enter to execute each cell.
End of explanation
# Combined land and ocean temperature averages (LOTI: Land Ocean Temperature Index)
data1 = pd.read_csv('http://github.com/adamlamee/CODINGinK12-data/raw/master/LOTI.csv', header=1).replace(to_replace="***", value=np.NaN)
data_LOTI = data1.apply(lambda x: pd.to_numeric(x, errors='ignore'))
# Only land temperature averages
data2 = pd.read_csv('http://github.com/adamlamee/CODINGinK12-data/raw/master/LAND.csv', header=1).replace(to_replace="***", value=np.NaN)
data_LAND = data2.apply(lambda x: pd.to_numeric(x, errors='ignore'))
Explanation: Importing a data set
This next cell may take a little while to run if it's grabbing a pretty big data set. The cell label to the left will look like "In [*]" while it's still thinking and "In [2]" when it's finished.
End of explanation
# The .head(n) command displays the first n rows of the file.
data_LAND.head(5)
Explanation: We can view the first few rows of the file we just imported.
End of explanation
x1 = data_LOTI.Year
y1 = data_LOTI.JanDec
# plt.plot() makes a line graph, by default
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1)
plt.title('Average land and ocean temperature readings')
plt.xlabel('Year')
plt.ylabel('Percent temp change')
x2 = data_LAND.Year
y2 = data_LAND.JanDec
# plt.plot() makes a line graph, by default
fig = plt.figure(figsize=(10, 5))
plt.plot(x2, y2)
plt.title('Land temperature readings')
plt.xlabel('Year')
plt.ylabel('Percent temp change')
# Wow, this needs a title and axis labels!
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1, label="Land and Ocean")
plt.plot(x2, y2, label="Land only")
plt.legend()
plt.show()
Explanation: Plotting the data
End of explanation
plt.xkcd()
fig = plt.figure(figsize=(10, 5))
plt.plot(x1, y1)
# to make normal plots again
mpl.rcParams.update(inline_rc)
Explanation: Edit and re-plot
If you like Randall Monroe's webcomic XKCD as much as I do, you can make your plots look like his hand-drawn ones. Thanks to Jake VanderPlas for sorting that out.
End of explanation |
14,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In part 1, we created the additional features (x^2, x^3) ourselves. Wouldn't it be nice to create activation functions that do just that, and let the neural network decide the connection weights during training?
Step1: Time to create our x^n activation function
Step2: Unnormalized features
Step3: Normalized features
After normalizing the features, SGD is not converging! What? And there was no performance advantage compared to the unnormalized features.
import torch
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Setup the training and test tensors
# Let's generate 400 examples
N = 400
x = np.random.uniform(low=-75, high=100, size=N)
y = 2*x
X_tensor = Variable(torch.FloatTensor(x), requires_grad=False)
y_tensor = Variable(torch.FloatTensor(y), requires_grad=False)
# Test set initialization
x_test = np.array([-2.5, 0.0, 19])
X_test_tsr = Variable(torch.FloatTensor(x_test), requires_grad=False)
# Normalized features
X_min = torch.min(X_tensor)
X_max = torch.max(X_tensor)
X_mean = torch.mean(X_tensor)
X_sub_mean = X_tensor-X_mean.expand_as(X_tensor)
X_max_min = X_max-X_min + 1e-7
X_norm_tsr = X_sub_mean/X_max_min.expand_as(X_sub_mean)
X_test_sub_mean = X_test_tsr-X_mean.expand_as(X_test_tsr)
X_test_norm_tsr = X_test_sub_mean/X_max_min.expand_as(X_test_sub_mean)
# Implement version-2 neural network
import math
from time import time
from collections import OrderedDict
def RunV2NNTraining(X, y, model, learning_rate=1e-5, epochs=5000, batch_size=None, X_test=None,
use_optimizer=None, adam_betas=(0.9, 0.999)):
# Neural Net
X_size = X.size()
N = X_size[0]
loss_fn = torch.nn.MSELoss(size_average=True)
# Choose Optimizer
optimizer = None
if use_optimizer:
if use_optimizer == 'SGD':
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
elif use_optimizer == 'Adam':
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate, betas=adam_betas)
elif use_optimizer == 'Adadelta':
optimizer = torch.optim.Adadelta(model.parameters(), lr=learning_rate)
elif use_optimizer == 'ASGD':
optimizer = torch.optim.ASGD(model.parameters(), lr=learning_rate)
elif use_optimizer == 'RMSprop':
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)
elif use_optimizer == 'Adagrad':
optimizer = torch.optim.Adagrad(model.parameters(), lr=learning_rate)
else:
print("Invalid Optimizer")
use_optimizer=None
losses = []
loss = None
start_time = time()
for t in range(epochs):
num_batches = 1
X_batch = None
y_batch = None
if batch_size:
num_batches = math.ceil(N/batch_size)
else:
batch_size = N
shuffle = torch.randperm(N)
for b in range(num_batches):
lower_index = b*batch_size
upper_index = min(lower_index+batch_size, N)
indices = shuffle[lower_index:upper_index]
X_batch = X[indices]
y_batch = y[indices]
y_pred = model(X_batch)
loss = loss_fn(y_pred, y_batch)
if use_optimizer:
optimizer.zero_grad()
loss.backward()
optimizer.step()
else:
# Zero the gradients before running the backward pass.
model.zero_grad()
loss.backward()
# Update the weights using gradient descent. Each parameter is a Variable, so
# we can access its data and gradients like we did before.
for param in model.parameters():
param.data -= learning_rate * param.grad.data
losses.append(loss.data[0])
end_time = time()
time_taken = end_time - start_time
print("Time Taken = %.2f seconds " % time_taken)
print("Final Loss: ", loss.data[0])
print("Parameters [w_1, w_2, w_3, b]: ")
for name, param in model.named_parameters():
print(name)
print(param.data)
# plot Loss vs Iterations
plt.plot(losses)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.show()
# Predictions on Test set
if X_test:
print("Test:")
print("X_test: ", X_test.data)
print("y_pred: ", model(X_test))
def GetV2NNLoss(X, y, model):
loss_fn = torch.nn.MSELoss(size_average=True)
y_pred = model(X)
loss = loss_fn(y_pred, y)
return loss.data[0]
Explanation: In part 1, we created the additional features (x^2, x^3) ourselves. Wouldn't it be nice to create activation functions that do just that, and let the neural network decide the connection weights during training?
End of explanation
class PowerNet(torch.nn.Module):
def __init__(self, n):
super(PowerNet, self).__init__()
self.n = n
self.linear = torch.nn.Linear(1, 1)
def forward(self, x):
return self.linear(x).pow(self.n)
Pow123Net_Mask = Variable(torch.FloatTensor([1.0,1.0,1.0]), requires_grad=False)
class Pow123Net(torch.nn.Module):
def __init__(self):
super(Pow123Net, self).__init__()
self.p1 = PowerNet(1)
self.p2 = PowerNet(2)
self.p3 = PowerNet(3)
def forward(self, x):
x1 = self.p1.forward(x)
x2 = self.p2.forward(x)
x3 = self.p3.forward(x)
xc = torch.cat((x1, x2, x3), 1)
return xc*Pow123Net_Mask.expand_as(xc)
Explanation: Time to create our x^n activation function
End of explanation
# use_optimizer can be Adam, RMSprop, Adadelta, ASGD, SGD, Adagrad
model = torch.nn.Sequential(OrderedDict([
("Pow123Net", Pow123Net()),
("FC", torch.nn.Linear(3, 1))]
))
RunV2NNTraining(X=X_tensor.view(-1,1), y=y_tensor, model=model, batch_size=None, epochs=25000, learning_rate=5e-3,
X_test=X_test_tsr.view(-1,1), use_optimizer='Adam')
# Now, how do we find the equation?
# One way to find it is to see the effect of each activation on the loss
print("Final Loss: ", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))
# mask f(x) = x
Pow123Net_Mask[0] = 0.0
print("Loss with x masked: ", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))
Pow123Net_Mask[0] = 1.0
# mask f(x) = x^2
Pow123Net_Mask[1] = 0.0
print("Loss with x^2 masked: ", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))
Pow123Net_Mask[1] = 1.0
# mask f(x) = x^3
Pow123Net_Mask[2] = 0.0
print("Loss with x^3 masked: ", GetV2NNLoss(X=X_tensor.view(-1,1), y=y_tensor, model=model))
Pow123Net_Mask[2] = 1.0
# Clearly activations X^2 and X^3 are not important
# Now what is the final equation?
p1_w = None
p1_b = None
fc1_w = None
fc_b = None
for name, param in model.named_parameters():
if name == 'Pow123Net.p1.linear.weight':
p1_w = param.data[0]
if name == 'Pow123Net.p1.linear.bias':
p1_b = param.data[0]
if name == 'FC.weight':
fc1_w = param.data[0,0]
if name == 'FC.bias':
fc_b = param.data[0]
coeff_x = p1_w*fc1_w
const = p1_b*fc1_w+fc_b
print("Finally the equation is y = ",coeff_x[0],"*x + ", const)
print("Pretty close to y = 2*x")
Explanation: Unnormalized features
End of explanation
model = torch.nn.Sequential(OrderedDict([
("Pow123Net", Pow123Net()),
("FC", torch.nn.Linear(3, 1))]
))
RunV2NNTraining(X=X_norm_tsr.view(-1,1), y=y_tensor, model=model, batch_size=None, epochs=25000, learning_rate=1e-1,
X_test=X_test_norm_tsr.view(-1,1), use_optimizer='Adam')
Explanation: Normalized features
After normalizing the features, SGD is not converging! What? And there was no performance advantage compared to the unnormalized features.
End of explanation |
14,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Lattice を使った形状制約
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 必要なパッケージをインポートします。
Step3: このガイドで使用されるデフォルト値です。
Step4: レストランのランク付けに使用するトレーニングデータセット
ユーザーがレストランの検索結果をクリックするかどうかを判定する、単純なシナリオを想定しましょう。このタスクでは、次の特定の入力特徴量でクリック率(CTR)を予測します。
平均評価(avg_rating)
Step6: この CTR 関数の等高線図を見てみましょう。
Step7: データを準備する
次に、合成データセットを作成する必要があります。シミュレーション済みのレストランのデータセットとその特徴量を生成するところから始めます。
Step8: トレーニング、評価、およびテストデータセットを生成しましょう。検索結果でレストランが閲覧されるときに、ユーザーのエンゲージメント(クリック有りまたはクリック無し)をサンプルポイントとして記録できます。
実際には、ユーザーが全検索結果を見ることはほとんどありません。つまり、ユーザーは、使用されている現在のランキングモデルによってすでに「良い」とみなされているレストランのみを閲覧する傾向にあるでしょう。そのため、トレーニングデータセットでは「良い」レストランはより頻繁に表示されて、過剰表現されます。さらに多くの特徴量を使用する際に、トレーニングデータセットでは、特徴量空間の「悪い」部分に大きなギャップが生じてしまいます。
モデルがランキングに使用される場合、トレーニングデータセットで十分に表現されていないより均一な分布を持つ、すべての関連結果で評価されることがほとんどです。この場合、過剰に表現されたデータポイントの過適合によって一般化可能性に欠けることから、柔軟で複雑なモデルは失敗する可能性があります。この問題には、トレーニングデータセットから形状制約を拾えない場合に合理的な予測を立てられるようにモデルを誘導する形状制約を追加するドメインナレッジを適用して対処します。
この例では、トレーニングデータセットは、人気のある良いレストランとのユーザーインタラクションで構成されており、テストデータセットには、上記で説明した評価設定をシミュレーションする一様分布があります。このようなテストデータセットは、実際の問題設定では利用できないことに注意してください。
Step9: トレーニングと評価に使用する input_fn を定義します。
Step10: 勾配ブースティング木を適合させる
まずは、avg_rating と num_reviews の 2 つの特徴量から始めましょう。
検証とテストのメトリックを描画および計算する補助関数をいくつか作成します。
Step11: データセットに TensorFlow 勾配ブースティング決定木を適合できます。
Step12: モデルは本来の CTR の一般的な形状をキャプチャし、まともな検証メトリックを使用していますが、入力空間のいくつかの部分に直感に反する振る舞いがあります。推定される CTR は平均評価またはレビュー数が増加するにつれ降下しているところです。これは、トレーニングデータセットがうまくカバーしていない領域のサンプルポイントが不足しているためです。単に、モデルにはデータのみから正しい振る舞いを推測する術がないのです。
この問題を解決するには、モデルが平均評価とレビュー数の両方に対して単調的に増加する値を出力しなければならないように、形状制約を強制します。TFL にこれを実装する方法は、後で説明します。
DNN を適合させる
DNN 分類器で、同じ手順を繰り返すことができます。レビュー数が少なく、十分なサンプルポイントがないため、同様の、意味をなさない外挿パターンとなります。検証メトリックが木のソリューションより優れていても、テストメトリックが悪化するところに注意してください。
Step13: 形状制約
TensorFlow Lattice(TFL)の焦点は、トレーニングデータを超えてモデルの振る舞いを守るために形状制約を強制することに当てられます。形状制約は TFL Keras レイヤーに適用されます。その詳細は、TensorFlow の JMLR 論文をご覧ください。
このチュートリアルでは、TF 缶詰 Estimator を使用してさまざまな形状制約を説明しますが、手順はすべて、TFL Keras レイヤーから作成されたモデルで実行することができます。
ほかの TensorFlow Estimator と同様に、TFL 缶詰 Estimator では、特徴量カラムを使用して入力形式を定義し、トレーニングの input_fn を使用してデータを渡します。TFL 缶詰 Estimator を使用するには、次の項目も必要です。
モデルの構成
Step14: CalibratedLatticeConfig を使用して、各入力にキャリブレータを適用(数値特徴量のピース単位の線形関数)してから格子レイヤーを適用して非線形的に較正済みの特徴量を融合する缶詰分類器を作成します。モデルの視覚化には、tfl.visualization を使用できます。特に、次のプロットは、缶詰分類器に含まれるトレーニング済みのキャリブレータを示します。
Step15: 制約が追加されると、推定される CTR は平均評価またはレビュー数が増加するにつれて、必ず増加するようになります。これは、キャリブレータと格子を確実に単調にすることで行われます。
収穫逓減
収穫逓減とは、特定の特徴量値を増加すると、それを高める上で得る限界利益は減少することを意味します。このケースでは、num_reviews 特徴量はこのパターンに従うと予測されるため、それに合わせてキャリブレータを構成することができます。収穫逓減を次の 2 つの十分な条件に分けることができます。
キャリブレータが単調的に増加している
キャリブレータが凹状である
Step16: テストメトリックが、凹状の制約を追加することで改善しているのがわかります。予測図もグラウンドトゥルースにより似通っています。
2D 形状制約
Step17: 次の図は、トレーニング済みの格子関数を示します。信頼制約により、較正済みの num_reviews のより大きな値によって、較正済みの avg_rating に対してより高い勾配が強制され、格子出力により大きな変化が生じることが期待されます。
Step18: キャリブレータを平滑化する
では、avg_rating のキャリブレータを見てみましょう。単調的に上昇してはいますが、勾配の変化は突然起こっており、解釈が困難です。そのため、regularizer_configs にレギュラライザーをセットアップして、このキャリブレータを平滑化したいと思います。
ここでは、反りの変化を縮減するために wrinkle レギュラライザを適用します。また、laplacian レギュラライザを使用してキャリブレータを平らにし、hessian レギュラライザを使用してより線形にします。
Step19: キャリブレータがスムーズになり、全体的な推定 CTR がグラウンドトゥルースにより一致するように改善されました。これは、テストメトリックと等高線図の両方に反映されます。
分類較正の部分単調性
これまで、モデルには 2 つの数値特徴量のみを使用してきました。ここでは、分類較正レイヤーを使用した 3 つ目の特徴量を追加します。もう一度、描画とメトリック計算用のヘルパー関数のセットアップから始めます。
Step20: 3 つ目の特徴量である dollar_rating を追加するには、TFL での分類特徴量の取り扱いは、特徴量カラムと特徴量構成の両方においてわずかに異なることを思い出してください。ここでは、ほかのすべての特徴量が固定されている場合に、"DD" レストランの出力が "D" よりも大きくなるように、部分単調性を強制します。これは、特徴量構成の monotonicity 設定を使用して行います。
Step21: この分類キャリブレータは、DD > D > DDD > DDDD というモデル出力の優先を示します。このセットアップではこれらは定数です。欠落する値のカラムもあることに注意してください。このチュートリアルのトレーニングデータとテストデータには欠落した特徴量はありませんが、ダウンストリームでモデルが使用される場合に値の欠落が生じたときには、モデルは欠損値の帰属を提供します。
ここでは、dollar_rating で条件付けされたモデルの予測 CTR も描画します。必要なすべての制約が各スライスで満たされているところに注意してください。
出力較正
ここまでトレーニングしてきたすべての TFL モデルでは、格子レイヤー(モデルグラフで "Lattice" と示される部分)はモデル予測を直接出力しますが、格子出力をスケーリングし直してモデル出力を送信すべきかわからないことがたまにあります。
特徴量が $log$ カウントでラベルがカウントである。
格子は頂点をほとんど使用しないように構成されているが、ラベル分布は比較的複雑である。
こういった場合には、格子出力とモデル出力の間に別のキャリブレータを追加して、モデルの柔軟性を高めることができます。ここでは、今作成したモデルにキーポイントを 5 つ使用したキャリブレータレイヤーを追加することにしましょう。また、出力キャリブレータのレギュラライザも追加して、関数の平滑性を維持します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice
Explanation: TensorFlow Lattice を使った形状制約
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.orgで表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示{</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード/a0}</a></td>
</table>
概要
このチュートリアルは、TensorFlow Lattice(TFL)ライブラリが提供する制約と正規化の概要です。ここでは、合成データセットに TFL 缶詰 Estimator を使用しますが、このチュートリアルの内容は TFL Keras レイヤーから構築されたモデルでも実行できます。
続行する前に、ランタイムに必要なすべてのパッケージがインストールされていることを確認してください(以下のコードセルでインポートされるとおりに行います)。
セットアップ
TF Lattice パッケージをインストールします。
End of explanation
import tensorflow as tf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: 必要なパッケージをインポートします。
End of explanation
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
Explanation: このガイドで使用されるデフォルト値です。
End of explanation
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
Explanation: レストランのランク付けに使用するトレーニングデータセット
ユーザーがレストランの検索結果をクリックするかどうかを判定する、単純なシナリオを想定しましょう。このタスクでは、次の特定の入力特徴量でクリック率(CTR)を予測します。
平均評価(avg_rating): [1,5] の範囲の値による数値特徴量。
レビュー数(num_reviews): 最大値 200 の数値特徴量。流行状況の測定値として使用します。
ドル記号評価(dollar_rating): {"D", "DD", "DDD", "DDDD"} セットの文字列値による分類特徴量。
ここでは、真の CTR を式 $$ CTR = 1 / (1 + exp{\mbox{b(dollar_rating)}-\mbox{avg_rating}\times log(\mbox{num_reviews}) /4 }) $$ で得る合成データセットを作成します。$b(\cdot)$ は各 dollar_rating をベースラインの値 $$ \mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5. $$ に変換します。
この式は、典型的なユーザーパターンを反映します。たとえば、ほかのすべてが固定された状態で、ユーザーは星評価の高いレストランを好み、"$$" のレストランは "$" のレストランよりも多いクリック率を得、"$$$"、"$$$$" となればさらに多いクリック率を得るというパターンです。
End of explanation
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
Generates contour plots for a list of (name, fn) functions.
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
Explanation: この CTR 関数の等高線図を見てみましょう。
End of explanation
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
Explanation: データを準備する
次に、合成データセットを作成する必要があります。シミュレーション済みのレストランのデータセットとその特徴量を生成するところから始めます。
End of explanation
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
density=True,
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
Explanation: トレーニング、評価、およびテストデータセットを生成しましょう。検索結果でレストランが閲覧されるときに、ユーザーのエンゲージメント(クリック有りまたはクリック無し)をサンプルポイントとして記録できます。
実際には、ユーザーが全検索結果を見ることはほとんどありません。つまり、ユーザーは、使用されている現在のランキングモデルによってすでに「良い」とみなされているレストランのみを閲覧する傾向にあるでしょう。そのため、トレーニングデータセットでは「良い」レストランはより頻繁に表示されて、過剰表現されます。さらに多くの特徴量を使用する際に、トレーニングデータセットでは、特徴量空間の「悪い」部分に大きなギャップが生じてしまいます。
モデルがランキングに使用される場合、トレーニングデータセットで十分に表現されていないより均一な分布を持つ、すべての関連結果で評価されることがほとんどです。この場合、過剰に表現されたデータポイントの過適合によって一般化可能性に欠けることから、柔軟で複雑なモデルは失敗する可能性があります。この問題には、トレーニングデータセットから形状制約を拾えない場合に合理的な予測を立てられるようにモデルを誘導する形状制約を追加するドメインナレッジを適用して対処します。
この例では、トレーニングデータセットは、人気のある良いレストランとのユーザーインタラクションで構成されており、テストデータセットには、上記で説明した評価設定をシミュレーションする一様分布があります。このようなテストデータセットは、実際の問題設定では利用できないことに注意してください。
End of explanation
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
Explanation: トレーニングと評価に使用する input_fn を定義します。
End of explanation
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
Explanation: 勾配ブースティング木を適合させる
まずは、avg_rating と num_reviews の 2 つの特徴量から始めましょう。
検証とテストのメトリックを描画および計算する補助関数をいくつか作成します。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
gbt_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
n_batches_per_layer=1,
max_depth=2,
n_trees=50,
learning_rate=0.05,
config=tf.estimator.RunConfig(tf_random_seed=42),
)
gbt_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(gbt_estimator, "GBT")
Explanation: データセットに TensorFlow 勾配ブースティング決定木を適合できます。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
Explanation: モデルは本来の CTR の一般的な形状をキャプチャし、まともな検証メトリックを使用していますが、入力空間のいくつかの部分に直感に反する振る舞いがあります。推定される CTR は平均評価またはレビュー数が増加するにつれ降下しているところです。これは、トレーニングデータセットがうまくカバーしていない領域のサンプルポイントが不足しているためです。単に、モデルにはデータのみから正しい振る舞いを推測する術がないのです。
この問題を解決するには、モデルが平均評価とレビュー数の両方に対して単調的に増加する値を出力しなければならないように、形状制約を強制します。TFL にこれを実装する方法は、後で説明します。
DNN を適合させる
DNN 分類器で、同じ手順を繰り返すことができます。レビュー数が少なく、十分なサンプルポイントがないため、同様の、意味をなさない外挿パターンとなります。検証メトリックが木のソリューションより優れていても、テストメトリックが悪化するところに注意してください。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
Explanation: 形状制約
TensorFlow Lattice(TFL)の焦点は、トレーニングデータを超えてモデルの振る舞いを守るために形状制約を強制することに当てられます。形状制約は TFL Keras レイヤーに適用されます。その詳細は、TensorFlow の JMLR 論文をご覧ください。
このチュートリアルでは、TF 缶詰 Estimator を使用してさまざまな形状制約を説明しますが、手順はすべて、TFL Keras レイヤーから作成されたモデルで実行することができます。
ほかの TensorFlow Estimator と同様に、TFL 缶詰 Estimator では、特徴量カラムを使用して入力形式を定義し、トレーニングの input_fn を使用してデータを渡します。TFL 缶詰 Estimator を使用するには、次の項目も必要です。
モデルの構成: モデルのアーキテクチャと特徴量ごとの形状制約とレギュラライザを定義します。
特徴量分析 input_fn: TFL 初期化を行うために TF input_fn でデータを渡します。
より詳しい説明については、缶詰 Estimator のチュートリアルまたは API ドキュメントをご覧ください。
単調性
最初に、単調性形状制約を両方の特徴量に追加して、単調性に関する問題を解決します。
TFL に形状制約を強制するように指示するには、特徴量の構成に制約を指定します。次のコードは、monotonicity="increasing" を設定することによって、num_reviews と avg_rating の両方に対して単調的に出力を増加するようにする方法を示します。
End of explanation
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: CalibratedLatticeConfig を使用して、各入力にキャリブレータを適用(数値特徴量のピース単位の線形関数)してから格子レイヤーを適用して非線形的に較正済みの特徴量を融合する缶詰分類器を作成します。モデルの視覚化には、tfl.visualization を使用できます。特に、次のプロットは、缶詰分類器に含まれるトレーニング済みのキャリブレータを示します。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: 制約が追加されると、推定される CTR は平均評価またはレビュー数が増加するにつれて、必ず増加するようになります。これは、キャリブレータと格子を確実に単調にすることで行われます。
収穫逓減
収穫逓減とは、特定の特徴量値を増加すると、それを高める上で得る限界利益は減少することを意味します。このケースでは、num_reviews 特徴量はこのパターンに従うと予測されるため、それに合わせてキャリブレータを構成することができます。収穫逓減を次の 2 つの十分な条件に分けることができます。
キャリブレータが単調的に増加している
キャリブレータが凹状である
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
Explanation: テストメトリックが、凹状の制約を追加することで改善しているのがわかります。予測図もグラウンドトゥルースにより似通っています。
2D 形状制約: 信頼
1 つか 2 つのレビューのみを持つレストランの 5 つ星評価は、信頼できない評価である可能性があります(レストランは実際には良くない可能性があります)が、数百件のレビューのあるレストランの 4 つ星評価にははるかに高い信頼性があります(この場合、レストランは良い可能性があります)。レストランのレビュー数によって平均評価にどれほどの信頼を寄せるかが変化することを見ることができます。
ある特徴量のより大きな(または小さな)値が別の特徴量の高い信頼性を示すことをモデルに指示する TFL 信頼制約を訓練することができます。これは、特徴量の構成で、reflects_trust_in 構成を設定することで実行できます。
End of explanation
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
Explanation: 次の図は、トレーニング済みの格子関数を示します。信頼制約により、較正済みの num_reviews のより大きな値によって、較正済みの avg_rating に対してより高い勾配が強制され、格子出力により大きな変化が生じることが期待されます。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: キャリブレータを平滑化する
では、avg_rating のキャリブレータを見てみましょう。単調的に上昇してはいますが、勾配の変化は突然起こっており、解釈が困難です。そのため、regularizer_configs にレギュラライザーをセットアップして、このキャリブレータを平滑化したいと思います。
ここでは、反りの変化を縮減するために wrinkle レギュラライザを適用します。また、laplacian レギュラライザを使用してキャリブレータを平らにし、hessian レギュラライザを使用してより線形にします。
End of explanation
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
Explanation: キャリブレータがスムーズになり、全体的な推定 CTR がグラウンドトゥルースにより一致するように改善されました。これは、テストメトリックと等高線図の両方に反映されます。
分類較正の部分単調性
これまで、モデルには 2 つの数値特徴量のみを使用してきました。ここでは、分類較正レイヤーを使用した 3 つ目の特徴量を追加します。もう一度、描画とメトリック計算用のヘルパー関数のセットアップから始めます。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` resturants has smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: 3 つ目の特徴量である dollar_rating を追加するには、TFL での分類特徴量の取り扱いは、特徴量カラムと特徴量構成の両方においてわずかに異なることを思い出してください。ここでは、ほかのすべての特徴量が固定されている場合に、"DD" レストランの出力が "D" よりも大きくなるように、部分単調性を強制します。これは、特徴量構成の monotonicity 設定を使用して行います。
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` resturants has smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: この分類キャリブレータは、DD > D > DDD > DDDD というモデル出力の優先を示します。このセットアップではこれらは定数です。欠落する値のカラムもあることに注意してください。このチュートリアルのトレーニングデータとテストデータには欠落した特徴量はありませんが、ダウンストリームでモデルが使用される場合に値の欠落が生じたときには、モデルは欠損値の帰属を提供します。
ここでは、dollar_rating で条件付けされたモデルの予測 CTR も描画します。必要なすべての制約が各スライスで満たされているところに注意してください。
出力較正
ここまでトレーニングしてきたすべての TFL モデルでは、格子レイヤー(モデルグラフで "Lattice" と示される部分)はモデル予測を直接出力しますが、格子出力をスケーリングし直してモデル出力を送信すべきかわからないことがたまにあります。
特徴量が $log$ カウントでラベルがカウントである。
格子は頂点をほとんど使用しないように構成されているが、ラベル分布は比較的複雑である。
こういった場合には、格子出力とモデル出力の間に別のキャリブレータを追加して、モデルの柔軟性を高めることができます。ここでは、今作成したモデルにキーポイントを 5 つ使用したキャリブレータレイヤーを追加することにしましょう。また、出力キャリブレータのレギュラライザも追加して、関数の平滑性を維持します。
End of explanation |
14,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google
Step1: Data collection
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Now import Cirq, ReCirq and the module dependencies
Step6: Tasks
We organize our experiments around the concept of "tasks". A task is a unit of work which consists of loading in input data, doing data processing or data collection, and saving results. Dividing your pipeline into tasks can be more of an art than a science. However, some rules of thumb can be observed
Step7: There are some things worth noting with this TasknameTask class.
We use the utility annotation @json_serializable_dataclass, which wraps the vanilla @dataclass annotation, except it permits saving and loading instances of ReadoutScanTask using Cirq's JSON serialization facilities. We give it an appropriate namespace to distinguish between top-level cirq objects.
Data members are all primitive or near-primitive data types
Step9: All of the I/O functions take a base_dir parameter to support full control
over where things are saved / loaded. Your script will use DEFAULT_BASE_DIR.
Typically, data collection (i.e. the code in this notebook) would be in a script so you can run it headless for a long time. Typically, analysis is done in one or more notebooks because of their ability to display rich output. By saving data correctly, your analysis and plotting code can run fast and interactively.
Running a Task
Each task is comprised not only of the Task object, but also a function that executes the task. For example, here we define the process by which we collect data.
There should only be one required argument
Step11: The driver script
Typically, the above classes and functions will live in a Python module; something like recirq/readout_scan/tasks.py. You can then have one or more "driver scripts" which are actually executed.
View the driver script as a configuration file that specifies exactly which parameters you want to run. You can see that below, we've formatted the construction of all the task objects to look like a configuration file. This is no accident! As noted in the docstring, the user can be expected to twiddle values defined in the script. Trying to factor this out into an ini file (or similar) is more effort than it's worth. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google
End of explanation
try:
import recirq
except ImportError:
!pip install --quiet git+https://github.com/quantumlib/ReCirq
Explanation: Data collection
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/guide/data_collection"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/guide/data_collection.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/guide/data_collection.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/guide/data_collection.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Following a set of idioms and using common utilities when running NISQy quantum
experiments is advantageous to:
Avoid duplication of effort for common tasks like data saving and loading
Enable easy data sharing
Reduce cognitive load of onboarding onto a new experiment. The 'science'
part is isolated from an idiomatic 'infrastructure' part.
Idioms and conventions are more flexible than a strict framework. You
don't need to do everything exactly.
This notebook shows how to design the infrastructure to support a simple experiment.
Setup
Install the ReCirq package:
End of explanation
import os
import numpy as np
import sympy
import cirq
import recirq
Explanation: Now import Cirq, ReCirq and the module dependencies:
End of explanation
@recirq.json_serializable_dataclass(namespace='recirq.readout_scan',
registry=recirq.Registry,
frozen=True)
class ReadoutScanTask:
"""Scan over Ry(theta) angles from -pi/2 to 3pi/2 tracing out a sinusoid
which is primarily affected by readout error.
See Also:
:py:func:`run_readout_scan`
Attributes:
dataset_id: A unique identifier for this dataset.
device_name: The device to run on, by name.
n_shots: The number of repetitions for each theta value.
qubit: The qubit to benchmark.
resolution_factor: We select the number of points in the linspace
so that the special points: (-1/2, 0, 1/2, 1, 3/2) * pi are
always included. The total number of theta evaluations
is resolution_factor * 4 + 1.
"""
dataset_id: str
device_name: str
n_shots: int
qubit: cirq.GridQubit
resolution_factor: int
@property
def fn(self):
n_shots = _abbrev_n_shots(n_shots=self.n_shots)
qubit = _abbrev_grid_qubit(self.qubit)
return (f'{self.dataset_id}/'
f'{self.device_name}/'
f'q-{qubit}/'
f'ry_scan_{self.resolution_factor}_{n_shots}')
# Define the following helper functions to make nicer `fn` keys
# for the tasks:
def _abbrev_n_shots(n_shots: int) -> str:
"""Shorter n_shots component of a filename."""
if n_shots % 1000 == 0:
return f'{n_shots // 1000}k'
return str(n_shots)
def _abbrev_grid_qubit(qubit: cirq.GridQubit) -> str:
"""Formatted grid_qubit component of a filename."""
return f'{qubit.row}_{qubit.col}'
Explanation: Tasks
We organize our experiments around the concept of "tasks". A task is a unit of work which consists of loading in input data, doing data processing or data collection, and saving results. Dividing your pipeline into tasks can be more of an art than a science. However, some rules of thumb can be observed:
A task should be at least 30 seconds worth of work but less than ten minutes worth of work. Finer division of tasks can make your pipelines more composable, more resistant to failure, easier to restart from failure, and easier to parallelize. Coarser division of tasks can amortize the cost of input and output data serialization and deserialization.
A task should be completely determined by a small-to-medium collection of primitive data type parameters. In fact, these parameters will represent instances of tasks and will act as "keys" in a database or on the filesystem.
Practically, a task consists of a TasknameTask (use your own name!) dataclass and a function which takes an instance of such a class as its argument, does the requisite data processing, and saves its results. Here, we define the ReadoutScanTask class with members that tell us exactly what data we want to collect.
End of explanation
EXPERIMENT_NAME = 'readout-scan'
DEFAULT_BASE_DIR = os.path.expanduser(f'~/cirq-results/{EXPERIMENT_NAME}')
Explanation: There are some things worth noting with this TasknameTask class.
We use the utility annotation @json_serializable_dataclass, which wraps the vanilla @dataclass annotation, except it permits saving and loading instances of ReadoutScanTask using Cirq's JSON serialization facilities. We give it an appropriate namespace to distinguish between top-level cirq objects.
Data members are all primitive or near-primitive data types: str, int, GridQubit. This sets us up well to use ReadoutScanTask in a variety of contexts where it may be tricky to use too-abstract data types. First, these simple members allow us to map from a task object to a unique /-delimited string appropriate for use as a filename or a unique key. Second, these parameters are immediately suitable to serve as columns in a pd.DataFrame or a database table.
There is a property named fn which provides a mapping from ReadoutScanTask instances to strings suitable for use as filenames. In fact, we will use this to save per-task data. Note that every dataclass member variable is used in the construction of fn. We also define some utility methods to make more human-readable strings. There must be a 1:1 mapping from task attributes to filenames. In general it is easy to go from a Task object to a filename. It should be possible to go the other way, although filenames prioritize readability over parsability; so in general this relationship won’t be used.
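For example, a task built with the same illustrative values used in the driver script below maps to a readable path like this (a sketch; the particular qubit is arbitrary):
```
task = ReadoutScanTask(
    dataset_id='2020-02-tutorial',
    device_name='Syc23-simulator',
    n_shots=40_000,
    qubit=cirq.GridQubit(3, 2),
    resolution_factor=6,
)
print(task.fn)
# -> 2020-02-tutorial/Syc23-simulator/q-3_2/ry_scan_6_40k
```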
We begin with a dataset_id field. Remember, instances of ReadoutScanTask must completely capture a task. We may want to run the same qubit for the same number of shots on the same device on two different days, so we include dataset_id to capture the notion of time and/or the state of the universe for tasks. Each family of tasks should include dataset_id as its first parameter.
Namespacing
A collection of tasks can be grouped into an "experiment" with a particular name.
This defines a folder ~/cirq-results/[experiment_name]/ under which data will be stored.
If you were storing data in a database, this might be the table name.
The second level of namespacing comes from tasks' dataset_id field which groups together an immutable collection of results taken at roughly the same time.
By convention, you can define the following global variables in your experiment scripts:
End of explanation
def run_readout_scan(task: ReadoutScanTask,
base_dir=None):
"""Execute a :py:class:`ReadoutScanTask` task."""
if base_dir is None:
base_dir = DEFAULT_BASE_DIR
if recirq.exists(task, base_dir=base_dir):
print(f"{task} already exists. Skipping.")
return
# Create a simple circuit
theta = sympy.Symbol('theta')
circuit = cirq.Circuit([
cirq.ry(theta).on(task.qubit),
cirq.measure(task.qubit, key='z')
])
# Use utilities to map sampler names to Sampler objects
sampler = recirq.get_sampler_by_name(device_name=task.device_name)
# Use a sweep over theta values.
# Set up limits so we include (-1/2, 0, 1/2, 1, 3/2) * pi
# The total number of points is resolution_factor * 4 + 1
n_special_points: int = 5
resolution_factor = task.resolution_factor
theta_sweep = cirq.Linspace(theta, -np.pi / 2, 3 * np.pi / 2,
resolution_factor * (n_special_points - 1) + 1)
thetas = np.asarray([v for ((k, v),) in theta_sweep.param_tuples()])
flat_circuit, flat_sweep = cirq.flatten_with_sweep(circuit, theta_sweep)
# Run the jobs
print(f"Collecting data for {task.qubit}", flush=True)
results = sampler.run_sweep(program=flat_circuit, params=flat_sweep,
repetitions=task.n_shots)
# Save the results
recirq.save(task=task, data={
'thetas': thetas,
'all_bitstrings': [
recirq.BitArray(np.asarray(r.measurements['z']))
for r in results]
}, base_dir=base_dir)
Explanation: All of the I/O functions take a base_dir parameter to support full control
over where things are saved / loaded. Your script will use DEFAULT_BASE_DIR.
Typically, data collection (i.e. the code in this notebook) would be in a script so you can run it headless for a long time. Typically, analysis is done in one or more notebooks because of their ability to display rich output. By saving data correctly, your analysis and plotting code can run fast and interactively.
Running a Task
Explanation: All of the I/O functions take a base_dir parameter to support full control
There should only be one required argument: task whose type is the class defined to completely specify the parameters of a task. Why define a separate class instead of just using normal function arguments?
Remember this class has a fn property that gives a unique string for parameters. If there were more arguments to this function, there would be inputs not specified in fn and the data output path could be ambiguous.
By putting the arguments in a class, they can easily be serialized as metadata alongside the output of the task.
The behavior of the function must be completely determined by its inputs.
This is why we put a dataset_id field in each task that's usually something resembling a timestamp. It captures the 'state of the world' as an input.
It's recommended that you add a check to the beginning of each task function to check if the output file already exists. If it does and the output is completely determined by its inputs, then we can deduce that the task is already done. This can save time for expensive classical pre-computations or it can be used to re-start a collection of tasks where only some of them had completed.
In general, you have freedom to implement your own logic in these functions, especially between the beginning (which is code for loading in input data) and the end (which is always a call to recirq.save()). Don't go crazy. If there's too much logic in your task execution function, consider factoring out useful functionality into the main library.
End of explanation
# Put in a file named run-readout-scan.py
import datetime
import cirq_google as cg
MAX_N_QUBITS = 5
def main():
"""Main driver script entry point.
This function contains configuration options and you will likely need
to edit it to suit your needs. Of particular note, please make sure
`dataset_id` and `device_name`
are set how you want them. You may also want to change the values in
the list comprehension to set the qubits.
"""
# Uncomment below for an auto-generated unique dataset_id
# dataset_id = datetime.datetime.now().isoformat(timespec='minutes')
dataset_id = '2020-02-tutorial'
data_collection_tasks = [
ReadoutScanTask(
dataset_id=dataset_id,
device_name='Syc23-simulator',
n_shots=40_000,
qubit=qubit,
resolution_factor=6,
)
for qubit in cg.Sycamore23.qubits[:MAX_N_QUBITS]
]
for dc_task in data_collection_tasks:
run_readout_scan(dc_task)
if __name__ == '__main__':
main()
Explanation: The driver script
Typically, the above classes and functions will live in a Python module; something like recirq/readout_scan/tasks.py. You can then have one or more "driver scripts" which are actually executed.
View the driver script as a configuration file that specifies exactly which parameters you want to run. You can see that below, we've formatted the construction of all the task objects to look like a configuration file. This is no accident! As noted in the docstring, the user can be expected to twiddle values defined in the script. Trying to factor this out into an ini file (or similar) is more effort than it's worth.
End of explanation |
14,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: To specify the experiments, 3 parameters need to be defined
Step2: Other parameters of the experiment
Step3: Load all data
Step4: Code for ML Ninja
Step5: Code for ML Ninja
Step6: Initialise the experiment
Step7: Select the training and testing data according to the selected fold. We split all images in 10 approximately equal parts and each fold includes these images together with all classes present in them.
Step8: Initialise the environment for testing the strategies.
Step9: Experiment with RL
This class implements the environment simulating a user annotating images of the PASCAL dataset, following the
OpenAI Gym interface for RL environments.
States are a combination of features of an image and a proposed box for annotation.
Actions (0,1) correspond to (do box verification, do extreme clicking).
Reward is -time per iteration.
Reward is 0 when annotation is obtained.
Episode terminates when annotation for an image is obtained
Training RL agent
The following code block runs the training process on the AnnotatingPASCAL environment, using the DQN as an agent.
First, some initial episodes are taken, storing their results in the ReplayBuffer; this warm-starts the replay buffer with some experience so that early stages of learning do not overfit.
Next, many training iterations are performed. Each training iteration has several phases
Step10: Test the learnt agent | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from __future__ import division
from __future__ import print_function
import math
import gym
from gym import spaces
import pandas as pd
import tensorflow as tf
from IPython import display
import time
from third_party import np_box_ops
import annotator, detector, dialog, environment
Explanation: Copyright 2018 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Experiment 1: fixed detector in many scenarios
This notebook computes the performance of the fixed strategies in various scenarios. This experiment is described in Sec. 5.2 of CVPR submission "Learning Intelligent Dialogs for Bounding Box Annotation".
End of explanation
# desired quality: high (min_iou=0.7) and low (min_iou=0.5)
min_iou = 0.7 # @param ["0.5", "0.7"]
# drawing speed: high (time_draw=7) and low (time_draw=25)
time_draw = 7 # @param ["7", "25"]
# if detector is weak, then we use best MIL, if it is strong, we use detector trained on PASCAL 2012
detector_weak = False # @param ['False']
Explanation: To specify the experiments, 3 parameters need to be defined:
detector
type of drawing
desired quality of bounding boxes (only the strong detector can be used in this notebook).
All together, it gives 8 possible experiment, 6 of which were presented in the paper.
End of explanation
random_seed = 805 # global variable that fixes the random seed everywhere for reproducibility of results
# what kind of features will be used to represent the state
# numerical values 1-20 correspond to one hot encoding of class
predictive_fields = ['prediction_score', 'relative_size', 'avg_score', 'dif_avg_score', 'dif_max_score', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
time_verify = 1.8 # @param
# select one of the 10 folds
fold = 8 # @param
Explanation: Other parameters of the experiment
End of explanation
# Download GT:
# wget wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_gt_for_iad.h5
# Download detections with features
# wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_proposals_plus_features_for_iad.h5
download_dir = ''
ground_truth = pd.read_hdf(download_dir + 'pascal_gt_for_iad.h5', 'ground_truth')
box_proposal_features = pd.read_hdf(download_dir + 'pascal_proposals_plus_features_for_iad.h5', 'box_proposal_features')
ground_truth.sample(n=3)
box_proposal_features.sample(n=3)
Explanation: Load all data
End of explanation
class Minibatch:
def __init__(self, state, action, reward, next_state, terminal):
self.state = state
self.action = action
self.reward = reward
self.next_state = next_state
self.terminal = terminal
def __str__(self):
return str(zip(self.state, self.action, self.reward, self.next_state, self.terminal))
def __getitem__(self, x):
return self.state[x], self.action[x], self.reward[x], self.next_state[x], self.terminal[x]
class ReplayBuffer:
def __init__(self, buffer_size=1e4):
self.buffer_size = int(buffer_size)
self.n = 0
self.write_index = 0
# Initialize numpy arrays to the full maximum size of the ReplayBuffer
def _init_nparray(self, state, action, reward, next_state, terminal):
# For each column, initialize the column for entire buffer_size.
self.all_states = np.array([state] * self.buffer_size)
self.all_actions = np.array([action] * self.buffer_size)
self.all_rewards = np.array([reward] * self.buffer_size)
self.all_next_states = np.array([next_state] * self.buffer_size)
self.all_terminals = np.array([terminal] * self.buffer_size)
self.n = 1
self.write_index = 1
def store_transition(self, state, action, reward, next_state, terminal):
# If buffer arrays not yet initialized, initialize them
if self.n == 0:
self._init_nparray(state, action, reward, next_state, terminal)
return
self.all_states[self.write_index] = state
self.all_actions[self.write_index] = action
self.all_rewards[self.write_index] = reward
self.all_next_states[self.write_index] = next_state
self.all_terminals[self.write_index] = terminal
self.write_index += 1
if self.write_index >= self.buffer_size:
self.write_index = 0
# Keep track of the max index to be used for sampling.
if self.n < self.buffer_size:
self.n += 1
def sample_minibatch(self, batch_size=32):
minibatch_indices = np.random.permutation(self.n)[:batch_size]
minibatch = Minibatch(
self.all_states[minibatch_indices],
self.all_actions[minibatch_indices],
self.all_rewards[minibatch_indices],
self.all_next_states[minibatch_indices],
self.all_terminals[minibatch_indices],
)
return minibatch
Explanation: Code for ML Ninja: Replay Buffer
The ReplayBuffer class is used for storing transitions of interaction with the environment. Each transition consists of:
- state - The state that the environment gave the agent.
- action - The action taken by the agent.
- reward - The reward given for taking the action from the state.
- next_state - The state resulting from the action.
- terminal - A flag indicating whether the environment terminated with this transition.
The ReplayBuffer class provides random samples of transitions, packaged as a Minibatch.
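A minimal usage sketch (toy 4-dimensional states rather than the real environment used below):
```
buffer = ReplayBuffer(buffer_size=1000)
for _ in range(50):
    buffer.store_transition(state=np.zeros(4), action=0, reward=-1.0,
                            next_state=np.ones(4), terminal=False)
minibatch = buffer.sample_minibatch(batch_size=8)
print(minibatch.state.shape)   # (8, 4)
print(minibatch.action.shape)  # (8,)
```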
End of explanation
class DQN:
def __init__(self,
observation_space,
action_space,
learning_rate=1e-3,
batch_size=32,
is_target_dqn=False,
hidden_layer_sizes=[30, 30],
discount_rate=0.99,
target_copy_factor=0.001,
session=None
):
self.observation_space = observation_space
self.action_space = action_space
self.learning_rate = learning_rate
self.batch_size = batch_size
self.is_target_dqn = is_target_dqn
self.hidden_layer_sizes = hidden_layer_sizes
self.discount_rate = discount_rate
self.target_copy_factor = target_copy_factor
self.session = session or tf.Session()
self._initialized = False
if not is_target_dqn:
self._target_dqn = DQN(
observation_space=observation_space,
action_space=action_space,
learning_rate=learning_rate,
batch_size=batch_size,
hidden_layer_sizes=hidden_layer_sizes,
is_target_dqn=True,
session=self.session,
)
# Make the net for inference and training. Requires that target DQN made first.
self._make_net()
def _make_net(self):
observation_length = sum([len(observ) for observ in self.observation_space.sample()])
var_scope_name = "dqn" if not self.is_target_dqn else "target_dqn"
with tf.variable_scope(var_scope_name):
# Placeholder for states, first dimension for batch size, second for observation vector length.
self._state_placeholder = tf.placeholder(dtype=tf.float32, shape=(self.batch_size, observation_length))
# Make first hidden layer
self._layers = []
self._layers.append(tf.contrib.layers.fully_connected(
inputs = self._state_placeholder,
num_outputs = self.hidden_layer_sizes[0],
trainable = not self.is_target_dqn,
variables_collections = [var_scope_name],
scope = "layer0"
))
# Make subsequent hidden layers.
for i in xrange(1, len(self.hidden_layer_sizes)):
self._layers.append(tf.contrib.layers.fully_connected(
inputs = self._layers[-1],
num_outputs = self.hidden_layer_sizes[i],
trainable = not self.is_target_dqn,
variables_collections = [var_scope_name],
scope = "layer{}".format(i)
))
# Make action-value predictions layer.
self._av_predictions = tf.contrib.layers.linear(
inputs = self._layers[-1],
num_outputs = self.action_space.n,
trainable = not self.is_target_dqn,
variables_collections = [var_scope_name],
scope = "av_predictions"
)
# If not the target DQN, make the placeholders and ops for computing Bellman loss and training.
if not self.is_target_dqn:
self._action_placeholder = tf.placeholder(dtype=tf.int32, shape=(self.batch_size))
self._reward_placeholder = tf.placeholder(dtype=tf.float32, shape=(self.batch_size))
self._terminal_placeholder = tf.placeholder(dtype=tf.bool, shape=(self.batch_size))
ones = tf.ones(shape=(self.batch_size))
zeros = tf.zeros(shape=(self.batch_size))
# Contains 1 where not terminal, 0 where terminal. (batch_size x 1)
terminal_mask = tf.where(self._terminal_placeholder, zeros, ones)
# Contains 1 where action was taken. (batch_size x action_space.n)
action_taken_mask = tf.one_hot(
indices = self._action_placeholder,
depth = self.action_space.n,
on_value = 1.0,
off_value = 0.0,
dtype = tf.float32
)
# Contains 1 where action was not taken. (batch_size x action_space.n)
action_not_taken_mask = tf.one_hot(
indices = self._action_placeholder,
depth = self.action_space.n,
on_value = 0.0,
off_value = 1.0,
dtype = tf.float32
)
# For samples that are not terminal, contains max next step action value predictions. (batch_size x 1)
masked_target_av_predictions = tf.reduce_max(self._target_dqn._av_predictions, reduction_indices=[1]) * terminal_mask
# Target values for actions taken. (batch_size x 1)
# = r + discount_rate * Q_target(s', a') , for non-terminal transitions
# = r , for terminal transitions
actions_taken_targets = self._reward_placeholder + self.discount_rate * masked_target_av_predictions
actions_taken_targets = tf.reshape(actions_taken_targets, (self.batch_size, 1))
# Target values for all actions. (batch_size x action_space.n)
# = the target predicted av, for indices corresponding to actions taken
# = the current predicted av, for indices corresponding to actions not taken
all_action_targets = actions_taken_targets * action_taken_mask + self._av_predictions * action_not_taken_mask
self._all_action_targets = all_action_targets
# Define error, loss
error = all_action_targets - self._av_predictions
self._loss = tf.reduce_sum(tf.square(error))
# Define train op
opt = tf.train.AdamOptimizer(self.learning_rate)
self._opt = opt
self._train_op = opt.minimize(self._loss, var_list=tf.get_collection('dqn'))
# Construct ops to copy over weighted average of parameter values to target net
copy_factor = self.target_copy_factor
copy_factor_complement = 1 - copy_factor
self._copy_ops = [target_var.assign(copy_factor * my_var + copy_factor_complement * target_var)
for (my_var, target_var)
in zip(tf.get_collection('dqn'), tf.get_collection('target_dqn'))]
def _copy_to_target_dqn(self):
assert not self.is_target_dqn, "cannot call _copy_to_target_dqn on target DQN"
self.session.run(self._copy_ops)
def _check_initialized(self):
if self._initialized:
return
self.session.run(tf.initialize_all_variables())
self._initialized = True
def get_action(self, state):
self._check_initialized()
state_batch = (state,) * self.batch_size
av_predictions = self.session.run(
self._av_predictions,
feed_dict = { self._state_placeholder : state_batch }
)
# av_predictions currently holds a whole minibatch. Extract first row.
av_predictions = av_predictions[0]
# Choose the index of max action.
max_action = 0
max_action_value = -float("inf")
for i in xrange(self.action_space.n):
if av_predictions[i] > max_action_value:
max_action = i
max_action_value = av_predictions[max_action]
return max_action
def save_params(self):
params = {}
for var in tf.get_collection('dqn'):
params[var.name] = self.session.run(var)
return params
def load_params(self, params):
for var in tf.get_collection('dqn'):
self.session.run(var.assign(params[var.name]))
def train(self, minibatch):
assert not self.is_target_dqn, "cannot call train() on target DQN"
self._check_initialized()
# Run a step of optimization with the minibatch fields.
loss, _ = self.session.run(
[self._loss, self._train_op],
feed_dict = {
self._state_placeholder : minibatch.state,
self._action_placeholder : minibatch.action,
self._reward_placeholder : minibatch.reward,
self._target_dqn._state_placeholder : minibatch.next_state,
self._terminal_placeholder : minibatch.terminal,
},
)
self._copy_to_target_dqn()
return loss
Explanation: Code for ML Ninja: DQN Class
The following code block implements Deepmind's DQN algorithm:
https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
The algorithm uses Q-learning to train an estimator Q(s,a), for actions and states in the environment.
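Concretely, the regression target used in the Bellman loss below is r + discount_rate * max_a' Q_target(s', a') for non-terminal transitions, and just r for terminal ones; Q_target is a second, slowly updated copy of the network whose weights are blended towards the main network with target_copy_factor after every training step.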
End of explanation
the_annotator = annotator.AnnotatorSimple(ground_truth, random_seed, time_verify, time_draw, min_iou)
the_detector = detector.Detector(box_proposal_features, predictive_fields)
image_class = ground_truth[['image_id', 'class_id']]
image_class = image_class.drop_duplicates()
Explanation: Initialise the experiment
End of explanation
# get a list of unique images
unique_image = image_class['image_id'].drop_duplicates()
# a list of image+class pairs
image_class_array = image_class.values[:,0]
if fold==1:
index_image_class1 = 0
else:
image_division1 = unique_image.iloc[502+501*(fold-2)]
index_image_class1 = np.searchsorted(image_class_array, image_division1, side='right')
if fold==10:
index_image_class2 = len(image_class_array)
else:
image_division2 = unique_image.iloc[502+501*(fold-1)]
index_image_class2 = np.searchsorted(image_class_array, image_division2, side='right')
# the selected fold becomes the training set
image_class_trainval = image_class.iloc[index_image_class1:index_image_class2]
# the other 9 folds become test set
image_class_test = pd.concat([image_class.iloc[0:index_image_class1],image_class.iloc[index_image_class2:]])
n_train = 500 # reserve samples for training
# permute data to get training and validation subsets
all_indeces_permuted = np.random.permutation(len(image_class_trainval))
indeces_for_train = all_indeces_permuted[0:n_train]
indeces_for_val = all_indeces_permuted[n_train:]
image_class_train = image_class_trainval.iloc[indeces_for_train]
image_class_val = image_class_trainval.iloc[indeces_for_val]
Explanation: Select the training and testing data according to the selected fold. We split all images into 10 approximately equal parts, and each fold includes these images together with all classes present in them.
End of explanation
env_train = environment.AnnotatingDataset(the_annotator, the_detector, image_class_train)
env_val = environment.AnnotatingDataset(the_annotator, the_detector, image_class_val)
env_test = environment.AnnotatingDataset(the_annotator, the_detector, image_class_test)
Explanation: Initialise the environment for testing the strategies.
End of explanation
# Warm start episodes
tf.reset_default_graph()
# Initialize the DQN agent
agent = DQN(env_train.observation_space, env_train.action_space,
batch_size=80, # @param
learning_rate=1e-3, # @param
hidden_layer_sizes=[30, 30], # @param
discount_rate=1,
)
REPLAY_BUFFER_SIZE = 1e4 # @param
replay_buffer = ReplayBuffer(buffer_size=REPLAY_BUFFER_SIZE)
num_action_classes = env_train.action_space.n
# Warm-start the replay buffer with some random actions.
WARM_START_EPISODES = 100 # @param
for _ in xrange(WARM_START_EPISODES):
state = env_train.reset()
terminal = False
while not terminal:
# Choose a random action
action = np.random.randint(0, num_action_classes)
next_state, reward, terminal, _ = env_train.step(action)
# Store the transition in the replay buffer.
replay_buffer.store_transition(state, action, reward, next_state, terminal)
# Get ready for next step
state = next_state
# Run training and validation episodes
# Run multiple training iterations. Each iteration consits of:
# - training episodes (with exploration)
# - neural network updates
# - test episodes for evaluating performance
# Exploration rate
EPSILON = 0.2 # @param
# Can experiment with dynamically changing eps: EPSILON - EPSILON*(iteration/TRAINING_ITERATIONS)
TRAINING_ITERATIONS = 500 # @param
# at each training iteration TRAINING_EPISODES_PER_ITERATION episodes are simulated
TRAINING_EPISODES_PER_ITERATION = 10 # @param
# at each training iteration NN_UPDATES_PER_ITERATION gradient steps are made
NN_UPDATES_PER_ITERATION = 30 # @param
train_episode_rewards = []
val_episode_rewards = []
agent_params = {}
best_iteration = 0
best_time = -float("inf")
# can set the number of samples to be used for estimating training error or validation error to be smaller for faster executions
n_for_trainerror = 200 #len(image_class_train)
n_for_valerror = 200 #len(image_class_val)
for iteration in xrange(TRAINING_ITERATIONS):
# Simulate training episodes.
for _ in xrange(TRAINING_EPISODES_PER_ITERATION):
state = env_train.reset()
terminal = False
while not terminal:
action = agent.get_action(state)
# With epsilon probability, take a random action.
if np.random.ranf() < EPSILON:
action = np.random.randint(0, num_action_classes)
next_state, reward, terminal, _ = env_train.step(action)
replay_buffer.store_transition(state, action, reward, next_state, terminal)
state = next_state
# Do neural network updates
for _ in xrange(NN_UPDATES_PER_ITERATION):
minibatch = replay_buffer.sample_minibatch(agent.batch_size)
agent.train(minibatch)
# Store the agent params from this iteration.
agent_params[iteration] = agent.save_params()
# Compute the training and validation error 20 times during the training iterations
if (iteration+1) % (TRAINING_ITERATIONS / 20) == 0:
print('Episode ', iteration, end = ': ')
# Run episodes to evaluate train reward.
train_reward = 0
for i in xrange(n_for_trainerror):
state = env_train.reset(current_index=i)
terminal = False
while not terminal:
action = agent.get_action(state)
next_state, reward, terminal, _ = env_train.step(action)
state = next_state
train_reward += reward
# Store the train episode stats.
print('average training error = ', - train_reward/n_for_trainerror)
train_episode_rewards.append(train_reward/n_for_trainerror)
# Run episodes to evaluate validation reward.
val_reward = 0
for i in xrange(n_for_valerror):
state = env_val.reset(current_index=i)
terminal = False
while not terminal:
action = agent.get_action(state)
next_state, reward, terminal, _ = env_val.step(action)
state = next_state
val_reward += reward
# Store the test episode stats.
val_episode_rewards.append(val_reward/n_for_valerror)
# remember the iteration with the lowest validation error for early stopping
if val_reward/n_for_valerror>best_time:
best_time = val_reward/n_for_valerror
best_iteration = iteration
# plot the training and validation errors
plt.plot(train_episode_rewards, 'b', label = 'train reward')
plt.plot(val_episode_rewards, 'g', label = 'validation reward')
Explanation: Experiment with RL
This class implements the environment simulating a user annotating images of the PASCAL dataset, following the
OpenAI Gym interface for RL environments.
States are a combination of features of an image and a proposed box for annotation.
Actions (0,1) correspond to (do box verification, do extreme clicking).
Reward is -time per iteration.
Reward is 0 when annotation is obtained.
Episode terminates when annotation for an image is obtained
Training RL agent
The following code block runs the training process on the AnnotatingPASCAL environment, using the DQN as an agent.
First, some initial episodes are taken, storing their results in the ReplayBuffer; this warm-starts the replay buffer with some experience so that early stages of learning do not overfit.
Next, many training iterations are performed. Each training iteration has several phases:
- Run training episodes. Each transition observed is stored in the replay buffer. With epsilon probability, at each timestep, a random action is taken for exploration.
- Run several updates to the DQN neural network parameters, with minibatches of transitions from the ReplayBuffer.
- Run testing episodes. Every action is taken directly from the current agent. Store the returns from the environment.
Finally, an average of the test returns and nn train error are plotted over the iteration index.
Hyperparameter Explanation:
- batch_size: How many samples are pulled from the Replay Buffer for one SGD step on neural network parameters.
- learning_rate: Constant that adjusts how large a step SGD takes per iteration.
- hidden_layer_sizes: A list of sizes for hidden layers of the neural network.
- discount_rate: Per-timestep discounting of future reward values. A lower value means that short-term rewards will be prioritized.
- WARM_START_EPISODES: How many episodes of random experience to gather before training starts, so that the Replay Buffer has enough data.
- TRAINING_ITERATIONS: How many iterations of training/testing to do.
- TRAINING_EPISODES_PER_ITERATION: How many episodes to run before training.
- TEST_EPISODES_PER_ITERATION: How many episodes to run for averaging performance of the current Q-function approximation.
- NN_UPDATES_PER_ITERATION: How many SGD steps to take per iteration.
End of explanation
%output_height 300
# load the agent from the iteration with the lowest validation error
print('Best iteration = ', best_iteration)
print('Best validation time = ', best_time)
agent.load_params(agent_params[best_iteration])
test_reward = 0
for i in xrange(len(image_class_test)):
state = env_test.reset(current_index=i)
terminal = False
print('Episode ', i, end = ': ')
while not terminal:
# Take an environment step
action = agent.get_action(state)
if action==0:
print('V', end='')
elif action==1:
print('D', end='')
next_state, reward, terminal, _ = env_test.step(action)
state = next_state
test_reward += reward
print()
print('Total duration of all episodes = ', -test_reward)
print('Average episode duration = ', -test_reward/len(image_class_test))
Explanation: Test the learnt agent
End of explanation |
14,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gensim Tutorial on Online Non-Negative Matrix Factorization
This notebooks explains basic ideas behind the open source NMF implementation in Gensim, including code examples for applying NMF to text processing.
What's in this tutorial?
Introduction
Step1: Dataset preparation
Let's load the notorious 20 Newsgroups dataset from Gensim's repository of pre-trained models and corpora
Step2: Create a train/test split
Step3: We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has major impact on any subsequent ML training.
Step4: Dictionary compilation
Let's create a mapping between tokens and their ids. Another option would be a HashDictionary, saving ourselves one pass over the training documents.
Step5: Create training corpus
Let's vectorize the training corpus into the bag-of-words format. We'll train LDA on a BOW and NMFs on an TF-IDF corpus
Step6: Here we simply stored the bag-of-words vectors into a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python
Step7: View the learned topics
Step8: Evaluation measure
Step9: Topic inference on new documents
With the NMF model trained, let's fetch one news document not seen during training, and infer its topic vector.
Step10: Word topic inference
Similarly, we can inspect the topic distribution assigned to a vocabulary term
Step11: Internal NMF state
Density is a fraction of non-zero elements in a matrix.
Step12: Term-topic matrix of shape (words, topics).
Step13: Topic-document matrix for the last batch of shape (topics, batch)
Step14: 3. Benchmarks
Gensim NMF vs Sklearn NMF vs Gensim LDA
We'll run these three unsupervised models on the 20newsgroups dataset.
20 Newsgroups also contains labels for each document, which will allow us to evaluate the trained models on an "upstream" classification task, using the unsupervised document topics as input features.
Metrics
We'll track these metrics as we train and test NMF on the 20-newsgroups corpus we created above
Step15: Run the models
Step16: Benchmark results
Step17: Main insights
LDA has the best coherence of all models.
LSI has the best l2 norm and f1 performance on the downstream task (its factors aren't non-negative, though).
Gensim NMF, Sklearn NMF and LSI have a slightly larger memory footprint than LDA.
Gensim NMF, Sklearn NMF and LSI are much faster than LDA.
Learned topics
Let's inspect the 5 topics learned by each of the models
Step18: Subjectively, Gensim and Sklearn NMFs are on par with each other, LDA and LSI look a bit worse.
4. NMF on English Wikipedia
This section shows how to train an NMF model on a large text corpus, the entire English Wikipedia
Step19: Load the Wikipedia dump
We'll use the gensim.downloader to download a parsed Wikipedia dump (6.1 GB disk space)
Step20: Print the titles and sections of the first Wikipedia article, as a little sanity check
Step22: Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article
Step23: Create a word-to-id mapping, in order to vectorize texts. Makes a full pass over the Wikipedia corpus, takes ~3.5 hours
Step24: Store preprocessed Wikipedia as bag-of-words sparse matrix in MatrixMarket format
When training NMF with a single pass over the input corpus ("online"), we simply vectorize each raw text straight from the input storage
Step26: For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to wiki.mm file in MatrixMarket format. The reason is, we'll be re-using the vectorized articles multiple times, for different models for our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
So, let's stream through the preprocessed sparse Wikipedia bag-of-words matrix while storing it to disk. This step takes about 3 hours and needs 38 GB of disk space
Step27: Save preprocessed Wikipedia in scipy.sparse format
This is only needed to run the Sklearn NMF on Wikipedia, for comparison in the benchmarks below. Sklearn expects in-memory scipy sparse input, not on-the-fly vector streams. Needs additional ~2 GB of disk space.
Skip this step if you don't need the Sklearn's NMF benchmark, and only want to run Gensim's NMF.
Step28: Metrics
We'll track these metrics as we train and test NMF on the Wikipedia corpus we created above
Step29: Define common parameters, to be shared by all evaluated models
Step30: Train Gensim NMF model and record its metrics
Wikipedia training
Train Gensim NMF model and record its metrics
Step31: Train Gensim LSI model and record its metrics
Step32: Train Gensim LDA and record its metrics
Step33: Train Sklearn NMF and record its metrics
Careful! Sklearn loads the entire input Wikipedia matrix into RAM. Even though the matrix is sparse, you'll need FIXME GB of free RAM to run the cell below.
Step34: Wikipedia results
Step35: Insights
Gensim's online NMF outperforms all other models in terms of speed and memory footprint.
Compared to Sklearn's NMF
Step36: It seems all four models successfully learned useful topics from the Wikipedia corpus.
5. And now for something completely different
Step38: Modified face decomposition notebook
Adapted from the excellent Scikit-learn tutorial (BSD license) | Python Code:
import logging
import time
from contextlib import contextmanager
import os
from multiprocessing import Process
import psutil
import numpy as np
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition.nmf import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
import gensim.downloader
from gensim import matutils, utils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, TfidfModel, LsiModel
from gensim.models.basemodel import BaseTopicModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Gensim Tutorial on Online Non-Negative Matrix Factorization
This notebooks explains basic ideas behind the open source NMF implementation in Gensim, including code examples for applying NMF to text processing.
What's in this tutorial?
Introduction: Why NMF?
Code example on 20 Newsgroups
Benchmarks against Sklearn's NMF and Gensim's LDA
Large-scale NMF training on the English Wikipedia (sparse text vectors)
NMF on face decomposition (dense image vectors)
1. Introduction to NMF
What's in a name?
Gensim's Online Non-Negative Matrix Factorization (NMF, NNMF, ONMF) implementation is based on Renbo Zhao, Vincent Y. F. Tan: Online Nonnegative Matrix Factorization with Outliers, 2016 and is optimized for extremely large, sparse, streamed inputs. Such inputs happen in NLP with unsupervised training on massive text corpora.
Why Online? Because corpora and datasets in modern ML can be very large, and RAM is limited. Unlike batch algorithms, online algorithms learn iteratively, streaming through the available training examples, without loading the entire dataset into RAM or requiring random-access to the data examples.
Why Non-Negative? Because non-negativity leads to more interpretable, sparse "human-friendly" topics. This is in contrast to e.g. SVD (another popular matrix factorization method with super-efficient implementation in Gensim), which produces dense negative factors and thus harder-to-interpret topics.
Matrix factorizations are the corner stone of modern machine learning. They can be used either directly (recommendation systems, bi-clustering, image compression, topic modeling…) or as internal routines in more complex deep learning algorithms.
How ONNMF works
Terminology:
- corpus is a stream of input documents = training examples
- batch is a chunk of input corpus, a word-document matrix mini-batch that fits in RAM
- W is a word-topic matrix (to be learned; stored in the resulting model)
- h is a topic-document matrix (to be learned; not stored, but rather inferred for documents on-the-fly)
- A, B - matrices that accumulate information from consecutive chunks. A = h.dot(ht), B = v.dot(ht).
The idea behind the algorithm is as follows:
```
Initialize W, A and B matrices
for batch in input corpus batches:
infer h:
do coordinate gradient descent step to find h that minimizes ||batch - Wh|| in L2 norm
bound h so that it is non-negative
update A and B:
A = h.dot(ht)
B = batch.dot(ht)
update W:
do gradient descent step to find W that minimizes ||0.5*trace(WtWA) - trace(WtB)|| in L2 norm
```
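A minimal dense NumPy sketch of one such batch update (illustrative only: fixed step sizes and plain projected gradient for h; the Gensim implementation below works on sparse input and adds proper convergence checks):
```
import numpy as np

def one_batch_update(batch, W, A, B, h_iters=50, lr_h=1e-2, lr_w=1e-3):
    # infer h: minimize ||batch - W h||^2 subject to h >= 0 (projected gradient)
    h = np.zeros((W.shape[1], batch.shape[1]))
    for _ in range(h_iters):
        h = np.maximum(h - lr_h * (W.T @ (W @ h - batch)), 0)
    # accumulate sufficient statistics from this batch
    A = A + h @ h.T
    B = B + batch @ h.T
    # one gradient step on W: minimize 0.5*trace(WtWA) - trace(WtB), keep W >= 0
    W = np.maximum(W - lr_w * (W @ A - B), 0)
    return W, A, B, h
```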
2. Code example: NMF on 20 Newsgroups
Preprocessing
Let's import the models we'll be using throughout this tutorial (numpy==1.14.2, matplotlib==3.0.2, pandas==0.24.1, sklearn==0.19.1, gensim==3.7.1) and set up logging at INFO level.
Gensim uses logging generously to inform users what's going on. Eyeballing the logs is a good sanity check, to make sure everything is working as expected.
Only numpy and gensim are actually needed to train and use NMF. The other imports are used only to make our life a little easier in this tutorial.
End of explanation
newsgroups = gensim.downloader.load('20-newsgroups')
categories = [
'alt.atheism',
'comp.graphics',
'rec.motorcycles',
'talk.politics.mideast',
'sci.space'
]
categories = {name: idx for idx, name in enumerate(categories)}
Explanation: Dataset preparation
Let's load the notorious 20 Newsgroups dataset from Gensim's repository of pre-trained models and corpora:
End of explanation
random_state = RandomState(42)
trainset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'train'
])
random_state.shuffle(trainset)
testset = np.array([
{
'data': doc['data'],
'target': categories[doc['topic']],
}
for doc in newsgroups
if doc['topic'] in categories and doc['set'] == 'test'
])
random_state.shuffle(testset)
Explanation: Create a train/test split:
End of explanation
train_documents = [preprocess_string(doc['data']) for doc in trainset]
test_documents = [preprocess_string(doc['data']) for doc in testset]
Explanation: We'll use very simple preprocessing with stemming to tokenize each document. YMMV; in your application, use whatever preprocessing makes sense in your domain. Correctly preparing the input has major impact on any subsequent ML training.
End of explanation
dictionary = Dictionary(train_documents)
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=20000) # filter out too in/frequent tokens
Explanation: Dictionary compilation
Let's create a mapping between tokens and their ids. Another option would be a HashDictionary, saving ourselves one pass over the training documents.
End of explanation
tfidf = TfidfModel(dictionary=dictionary)
train_corpus = [
dictionary.doc2bow(document)
for document
in train_documents
]
test_corpus = [
dictionary.doc2bow(document)
for document
in test_documents
]
train_corpus_tfidf = list(tfidf[train_corpus])
test_corpus_tfidf = list(tfidf[test_corpus])
Explanation: Create training corpus
Let's vectorize the training corpus into the bag-of-words format. We'll train LDA on a BOW and NMFs on an TF-IDF corpus:
End of explanation
%%time
nmf = GensimNmf(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=1000,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
kappa=1,
)
W = nmf.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus_tfidf,
num_terms=W.shape[0],
)
if isinstance(nmf, SklearnNmf):
H = nmf.transform(dense_test_corpus.T).T
else:
H = np.zeros((nmf.num_topics, len(test_corpus_tfidf)))
for bow_id, bow in enumerate(test_corpus_tfidf):
for topic_id, word_count in nmf[bow]:
H[topic_id, bow_id] = word_count
np.linalg.norm(W.dot(H))
np.linalg.norm(dense_test_corpus)
Explanation: Here we simply stored the bag-of-words vectors into a list, but Gensim accepts any iterable as input, including streamed ones. To learn more about memory-efficient input iterables, see our Data Streaming in Python: Generators, Iterators, Iterables tutorial.
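For instance, the same TF-IDF vectors could be produced lazily by a generator instead of a list (a sketch; note that a plain generator is exhausted after a single pass, so for multi-pass training you would wrap this logic in a class with an __iter__ method):
```
def stream_tfidf(tokenized_docs):
    for tokens in tokenized_docs:
        yield tfidf[dictionary.doc2bow(tokens)]

streamed_train_corpus = stream_tfidf(train_documents)
```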
NMF Model Training
The API works in the same way as other Gensim models, such as LdaModel or LsiModel.
Notable model parameters:
kappa float, optional
Gradient descent step size.
Larger value makes the model train faster, but could lead to non-convergence if set too large.
w_max_iter int, optional
Maximum number of iterations to train W per each batch.
w_stop_condition float, optional
If the error difference gets smaller than this, training of W stops for the current batch.
h_r_max_iter int, optional
Maximum number of iterations to train h per each batch.
h_r_stop_condition float, optional
If the error difference gets smaller than this, training of h stops for the current batch.
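For illustration, all of these can be passed explicitly (the values below are placeholders, not tuned recommendations):
```
nmf_explicit = GensimNmf(
    corpus=train_corpus_tfidf,
    num_topics=5,
    id2word=dictionary,
    kappa=1.0,
    w_max_iter=200,
    w_stop_condition=1e-4,
    h_r_max_iter=50,
    h_r_stop_condition=1e-3,
    random_state=0,
)
```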
Learn an NMF model with 5 topics:
End of explanation
nmf.show_topics()
Explanation: View the learned topics
End of explanation
CoherenceModel(
model=nmf,
corpus=test_corpus_tfidf,
coherence='u_mass'
).get_coherence()
Explanation: Evaluation measure: Coherence
Topic coherence measures how often the most frequent tokens from each topic co-occur in one document. Larger is better.
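Roughly, the u_mass measure scores each ordered pair of a topic's top tokens as log((D(w_i, w_j) + 1) / D(w_j)), where D(w) is the number of documents containing w and D(w_i, w_j) the number containing both, and then averages these scores; the values are negative, and values closer to zero indicate more coherent topics.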
End of explanation
print(testset[0]['data'])
print('=' * 100)
print("Topics: {}".format(nmf[test_corpus[0]]))
Explanation: Topic inference on new documents
With the NMF model trained, let's fetch one news document not seen during training, and infer its topic vector.
End of explanation
word = dictionary[0]
print("Word: {}".format(word))
print("Topics: {}".format(nmf.get_term_topics(word)))
Explanation: Word topic inference
Similarly, we can inspect the topic distribution assigned to a vocabulary term:
End of explanation
def density(matrix):
return (matrix > 0).mean()
Explanation: Internal NMF state
Density is a fraction of non-zero elements in a matrix.
End of explanation
print("Density: {}".format(density(nmf._W)))
Explanation: Term-topic matrix of shape (words, topics).
End of explanation
print("Density: {}".format(density(nmf._h)))
Explanation: Topic-document matrix for the last batch of shape (topics, batch)
End of explanation
fixed_params = dict(
chunksize=1000,
num_topics=5,
id2word=dictionary,
passes=5,
eval_every=10,
minimum_probability=0,
random_state=0,
)
@contextmanager
def measure_ram(output, tick=5):
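# Context manager: spawn a helper process that samples this process's resident memory (RSS) every `tick` seconds and appends it to `output`.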
def _measure_ram(pid, output, tick=tick):
py = psutil.Process(pid)
with open(output, 'w') as outfile:
while True:
memory = py.memory_info().rss
outfile.write("{}\n".format(memory))
outfile.flush()
time.sleep(tick)
pid = os.getpid()
p = Process(target=_measure_ram, args=(pid, output, tick))
p.start()
yield
p.terminate()
def get_train_time_and_ram(func, name, tick=5):
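# Run `func` while logging RAM usage to disk; return (wall-clock time, mean RAM, max RAM, func's return value).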
memprof_filename = "{}.memprof".format(name)
start = time.time()
with measure_ram(memprof_filename, tick=tick):
result = func()
elapsed_time = pd.to_timedelta(time.time() - start, unit='s').round('ms')
memprof_df = pd.read_csv(memprof_filename, squeeze=True)
mean_ram = "{} MB".format(
int(memprof_df.mean() // 2 ** 20),
)
max_ram = "{} MB".format(int(memprof_df.max() // 2 ** 20))
return elapsed_time, mean_ram, max_ram, result
def get_f1(model, train_corpus, X_test, y_train, y_test):
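# Fit a logistic-regression classifier on the model's topic vectors and return the micro-averaged F1 score on the test split.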
if isinstance(model, SklearnNmf):
dense_train_corpus = matutils.corpus2dense(
train_corpus,
num_terms=model.components_.shape[1],
)
X_train = model.transform(dense_train_corpus.T)
else:
X_train = np.zeros((len(train_corpus), model.num_topics))
for bow_id, bow in enumerate(train_corpus):
for topic_id, word_count in model[bow]:
X_train[bow_id, topic_id] = word_count
log_reg = LogisticRegressionCV(multi_class='multinomial', cv=5)
log_reg.fit(X_train, y_train)
pred_labels = log_reg.predict(X_test)
return f1_score(y_test, pred_labels, average='micro')
def get_sklearn_topics(model, top_n=5):
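# Format a fitted sklearn NMF into Gensim-style topic strings, ordering topics by sparsity (sparsest first) and keeping the top 10 tokens per topic.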
topic_probas = model.components_.T
topic_probas = topic_probas / topic_probas.sum(axis=0)
sparsity = np.zeros(topic_probas.shape[1])
for row in topic_probas:
sparsity += (row == 0)
sparsity /= topic_probas.shape[1]
topic_probas = topic_probas[:, sparsity.argsort()[::-1]][:, :top_n]
token_indices = topic_probas.argsort(axis=0)[:-11:-1, :]
topic_probas.sort(axis=0)
topic_probas = topic_probas[:-11:-1, :]
topics = []
for topic_idx in range(topic_probas.shape[1]):
tokens = [
model.id2word[token_idx]
for token_idx
in token_indices[:, topic_idx]
]
topic = (
'{}*"{}"'.format(round(proba, 3), token)
for proba, token
in zip(topic_probas[:, topic_idx], tokens)
)
topic = " + ".join(topic)
topics.append((topic_idx, topic))
return topics
def get_metrics(model, test_corpus, train_corpus=None, y_train=None, y_test=None, dictionary=None):
if isinstance(model, SklearnNmf):
model.get_topics = lambda: model.components_
model.show_topics = lambda top_n: get_sklearn_topics(model, top_n)
model.id2word = dictionary
W = model.get_topics().T
dense_test_corpus = matutils.corpus2dense(
test_corpus,
num_terms=W.shape[0],
)
if isinstance(model, SklearnNmf):
H = model.transform(dense_test_corpus.T).T
else:
H = np.zeros((model.num_topics, len(test_corpus)))
for bow_id, bow in enumerate(test_corpus):
for topic_id, word_count in model[bow]:
H[topic_id, bow_id] = word_count
l2_norm = None
if not isinstance(model, LdaModel):
pred_factors = W.dot(H)
l2_norm = np.linalg.norm(pred_factors - dense_test_corpus)
l2_norm = round(l2_norm, 4)
f1 = None
if train_corpus and y_train and y_test:
f1 = get_f1(model, train_corpus, H.T, y_train, y_test)
f1 = round(f1, 4)
model.normalize = True
coherence = CoherenceModel(
model=model,
corpus=test_corpus,
coherence='u_mass'
).get_coherence()
coherence = round(coherence, 4)
topics = model.show_topics(5)
model.normalize = False
return dict(
coherence=coherence,
l2_norm=l2_norm,
f1=f1,
topics=topics,
)
Explanation: 3. Benchmarks
Gensim NMF vs Sklearn NMF vs Gensim LDA
We'll run these three unsupervised models on the 20newsgroups dataset.
20 Newsgroups also contains labels for each document, which will allow us to evaluate the trained models on an "upstream" classification task, using the unsupervised document topics as input features.
Metrics
We'll track these metrics as we train and test NMF on the 20-newsgroups corpus we created above:
- train time - time to train a model
- mean_ram - mean RAM consumption during training
- max_ram - maximum RAM consumption during training
- coherence - coherence score (larger is better).
- l2_norm - L2 norm of v - Wh (less is better, not defined for LDA).
- f1 - F1 score on the task of news topic classification (larger is better).
End of explanation
tm_metrics = pd.DataFrame(columns=['model', 'train_time', 'coherence', 'l2_norm', 'f1', 'topics'])
y_train = [doc['target'] for doc in trainset]
y_test = [doc['target'] for doc in testset]
# LDA metrics
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(
corpus=train_corpus,
**fixed_params,
),
'lda',
0.1,
)
row.update(get_metrics(
lda, test_corpus, train_corpus, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# LSI metrics
row = {}
row['model'] = 'lsi'
row['train_time'], row['mean_ram'], row['max_ram'], lsi = get_train_time_and_ram(
lambda: LsiModel(
corpus=train_corpus_tfidf,
num_topics=5,
id2word=dictionary,
chunksize=2000,
),
'lsi',
0.1,
)
row.update(get_metrics(
lsi, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Sklearn NMF metrics
row = {}
row['model'] = 'sklearn_nmf'
train_csc_corpus_tfidf = matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: SklearnNmf(n_components=5, random_state=42).fit(train_csc_corpus_tfidf),
'sklearn_nmf',
0.1,
)
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test, dictionary,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
# Gensim NMF metrics
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], gensim_nmf = get_train_time_and_ram(
lambda: GensimNmf(
normalize=False,
corpus=train_corpus_tfidf,
**fixed_params
),
'gensim_nmf',
0.1,
)
row.update(get_metrics(
gensim_nmf, test_corpus_tfidf, train_corpus_tfidf, y_train, y_test,
))
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
tm_metrics.replace(np.nan, '-', inplace=True)
Explanation: Run the models
End of explanation
tm_metrics.drop('topics', axis=1)
Explanation: Benchmark results
End of explanation
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
Explanation: Main insights
LDA has the best coherence of all models.
LSI has the best l2 norm and f1 performance on the downstream task (its factors aren't non-negative, though).
Gensim NMF, Sklearn NMF and LSI have a slightly larger memory footprint than LDA.
Gensim NMF, Sklearn NMF and LSI are much faster than LDA.
Learned topics
Let's inspect the 5 topics learned by each of the models:
End of explanation
# Re-import modules from scratch, so that this Section doesn't rely on any previous cells.
import itertools
import json
import logging
import time
import os
from smart_open import smart_open
import psutil
import numpy as np
import scipy.sparse
from contextlib import contextmanager
from multiprocessing import Process
from tqdm import tqdm, tqdm_notebook
import joblib
import pandas as pd
from sklearn.decomposition.nmf import NMF as SklearnNmf
import gensim.downloader
from gensim import matutils
from gensim.corpora import MmCorpus, Dictionary
from gensim.models import LdaModel, LdaMulticore, CoherenceModel
from gensim.models.nmf import Nmf as GensimNmf
from gensim.utils import simple_preprocess
tqdm.pandas()
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Subjectively, Gensim and Sklearn NMFs are on par with each other, LDA and LSI look a bit worse.
4. NMF on English Wikipedia
This section shows how to train an NMF model on a large text corpus, the entire English Wikipedia: 2.6 billion words, in 23.1 million article sections across 5 million Wikipedia articles.
The data preprocessing takes a while, and we'll be comparing multiple models, so reserve about 3 hours and some 20 GB of disk space to go through the following notebook cells in full. You'll need gensim>=3.7.1, numpy, tqdm, pandas, psutil, joblib and sklearn.
End of explanation
data = gensim.downloader.load("wiki-english-20171001")
Explanation: Load the Wikipedia dump
We'll use the gensim.downloader to download a parsed Wikipedia dump (6.1 GB disk space):
End of explanation
data = gensim.downloader.load("wiki-english-20171001")
article = next(iter(data))
print("Article: %r\n" % article['title'])
for section_title, section_text in zip(article['section_titles'], article['section_texts']):
print("Section title: %r" % section_title)
print("Section text: %s…\n" % section_text[:100].replace('\n', ' ').strip())
Explanation: Print the titles and sections of the first Wikipedia article, as a little sanity check:
End of explanation
def wikidump2tokens(articles):
    """Stream through the Wikipedia dump, yielding a list of tokens for each article."""
for article in articles:
article_section_texts = [
" ".join([title, text])
for title, text
in zip(article['section_titles'], article['section_texts'])
]
article_tokens = simple_preprocess(" ".join(article_section_texts))
yield article_tokens
Explanation: Let's create a Python generator function that streams through the downloaded Wikipedia dump and preprocesses (tokenizes, lower-cases) each article:
End of explanation
if os.path.exists('wiki.dict'):
# If we already stored the Dictionary in a previous run, simply load it, to save time.
dictionary = Dictionary.load('wiki.dict')
else:
dictionary = Dictionary(wikidump2tokens(data))
# Keep only the 30,000 most frequent vocabulary terms, after filtering away terms
# that are too frequent/too infrequent.
dictionary.filter_extremes(no_below=5, no_above=0.5, keep_n=30000)
dictionary.save('wiki.dict')
Explanation: Create a word-to-id mapping, in order to vectorize texts. Makes a full pass over the Wikipedia corpus, takes ~3.5 hours:
End of explanation
vector_stream = (dictionary.doc2bow(article) for article in wikidump2tokens(data))
Explanation: Store preprocessed Wikipedia as bag-of-words sparse matrix in MatrixMarket format
When training NMF with a single pass over the input corpus ("online"), we simply vectorize each raw text straight from the input storage:
End of explanation
class RandomSplitCorpus(MmCorpus):
    """Use the fact that MmCorpus supports random indexing, and create a streamed
    corpus in shuffled order, including a train/test split for evaluation."""
def __init__(self, random_seed=42, testset=False, testsize=1000, *args, **kwargs):
super().__init__(*args, **kwargs)
random_state = np.random.RandomState(random_seed)
self.indices = random_state.permutation(range(self.num_docs))
test_nnz = sum(len(self[doc_idx]) for doc_idx in self.indices[:testsize])
if testset:
self.indices = self.indices[:testsize]
self.num_docs = testsize
self.num_nnz = test_nnz
else:
self.indices = self.indices[testsize:]
self.num_docs -= testsize
self.num_nnz -= test_nnz
def __iter__(self):
for doc_id in self.indices:
yield self[doc_id]
# Build the TF-IDF transformation from the dictionary; it is needed to create the TF-IDF corpus below.
tfidf = TfidfModel(dictionary=dictionary)
if not os.path.exists('wiki.mm'):
    MmCorpus.serialize('wiki.mm', vector_stream, progress_cnt=100000)
if not os.path.exists('wiki_tfidf.mm'):
    MmCorpus.serialize('wiki_tfidf.mm', tfidf[MmCorpus('wiki.mm')], progress_cnt=100000)
# Load back the vectors as two lazily-streamed train/test iterables.
train_corpus = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki.mm',
)
test_corpus = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki.mm',
)
train_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=False, testsize=10000, fname='wiki_tfidf.mm',
)
test_corpus_tfidf = RandomSplitCorpus(
random_seed=42, testset=True, testsize=10000, fname='wiki_tfidf.mm',
)
Explanation: For the purposes of this tutorial though, we'll serialize ("cache") the vectorized bag-of-words vectors to disk, to wiki.mm file in MatrixMarket format. The reason is, we'll be re-using the vectorized articles multiple times, for different models for our benchmarks, and also shuffling them, so it makes sense to amortize the vectorization time by persisting the resulting vectors to disk.
So, let's stream through the preprocessed sparse Wikipedia bag-of-words matrix while storing it to disk. This step takes about 3 hours and needs 38 GB of disk space:
End of explanation
if not os.path.exists('wiki_train_csr.npz'):
scipy.sparse.save_npz(
'wiki_train_csr.npz',
matutils.corpus2csc(train_corpus_tfidf, len(dictionary)).T,
)
Explanation: Save preprocessed Wikipedia in scipy.sparse format
This is only needed to run the Sklearn NMF on Wikipedia, for comparison in the benchmarks below. Sklearn expects in-memory scipy sparse input, not on-the-fly vector streams. Needs additional ~2 GB of disk space.
Skip this step if you don't need the Sklearn's NMF benchmark, and only want to run Gensim's NMF.
End of explanation
tm_metrics = pd.DataFrame(columns=[
'model', 'train_time', 'mean_ram', 'max_ram', 'coherence', 'l2_norm', 'topics',
])
Explanation: Metrics
We'll track these metrics as we train and test NMF on the Wikipedia corpus we created above:
- train time - time to train a model
- mean_ram - mean RAM consumption during training
- max_ram - maximum RAM consumption during training
- coherence - coherence score (larger is better).
- l2_norm - L2 norm of v - Wh (less is better, not defined for LDA).
Define a dataframe in which we'll store the recorded metrics:
End of explanation
params = dict(
chunksize=2000,
num_topics=50,
id2word=dictionary,
passes=1,
eval_every=10,
minimum_probability=0,
random_state=42,
)
Explanation: Define common parameters, to be shared by all evaluated models:
End of explanation
row = {}
row['model'] = 'gensim_nmf'
row['train_time'], row['mean_ram'], row['max_ram'], nmf = get_train_time_and_ram(
lambda: GensimNmf(normalize=False, corpus=train_corpus_tfidf, **params),
'gensim_nmf',
1,
)
print(row)
nmf.save('gensim_nmf.model')
nmf = GensimNmf.load('gensim_nmf.model')
row.update(get_metrics(nmf, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Wikipedia training
Train Gensim NMF model and record its metrics
End of explanation
row = {}
row['model'] = 'lsi'
row['train_time'], row['mean_ram'], row['max_ram'], lsi = get_train_time_and_ram(
lambda: LsiModel(
corpus=train_corpus_tfidf,
chunksize=2000,
num_topics=50,
id2word=dictionary,
),
'lsi',
1,
)
print(row)
lsi.save('lsi.model')
lsi = LsiModel.load('lsi.model')
row.update(get_metrics(lsi, test_corpus_tfidf))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Train Gensim LSI model and record its metrics
End of explanation
row = {}
row['model'] = 'lda'
row['train_time'], row['mean_ram'], row['max_ram'], lda = get_train_time_and_ram(
lambda: LdaModel(corpus=train_corpus, **params),
'lda',
1,
)
print(row)
lda.save('lda.model')
lda = LdaModel.load('lda.model')
row.update(get_metrics(lda, test_corpus))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Train Gensim LDA and record its metrics
End of explanation
row = {}
row['model'] = 'sklearn_nmf'
sklearn_nmf = SklearnNmf(n_components=50, tol=1e-2, random_state=42)
row['train_time'], row['mean_ram'], row['max_ram'], sklearn_nmf = get_train_time_and_ram(
lambda: sklearn_nmf.fit(scipy.sparse.load_npz('wiki_train_csr.npz')),
'sklearn_nmf',
10,
)
print(row)
joblib.dump(sklearn_nmf, 'sklearn_nmf.joblib')
sklearn_nmf = joblib.load('sklearn_nmf.joblib')
row.update(get_metrics(
sklearn_nmf, test_corpus_tfidf, dictionary=dictionary,
))
print(row)
tm_metrics = tm_metrics.append(pd.Series(row), ignore_index=True)
Explanation: Train Sklearn NMF and record its metrics
Careful! Sklearn loads the entire input Wikipedia matrix into RAM. Even though the matrix is sparse, you'll need a lot of free RAM — the in-memory input matrices alone take roughly 8 GB (see the Insights below) — to run the cell below.
End of explanation
tm_metrics.replace(np.nan, '-', inplace=True)
tm_metrics.drop(['topics', 'f1'], axis=1)
Explanation: Wikipedia results
End of explanation
def compare_topics(tm_metrics):
for _, row in tm_metrics.iterrows():
print('\n{}:'.format(row.model))
print("\n".join(str(topic) for topic in row.topics))
compare_topics(tm_metrics)
Explanation: Insights
Gensim's online NMF outperforms all other models in terms of speed and memory footprint.
Compared to Sklearn's NMF:
2x faster.
Uses ~20x less memory.
About 8GB of Sklearn's RAM comes from the in-memory input matrices, which, in contrast to Gensim NMF, cannot be streamed iteratively. But even if we forget about the huge input size, Sklearn NMF uses about 2-8 GB of RAM – significantly more than Gensim NMF or LDA.
L2 norm and coherence are a bit worse.
Compared to Gensim's LSI:
3x faster
Better coherence but slightly worse l2 norm.
Compared to Gensim's LDA:
3x faster.
Coherence is worse than LDA's.
Learned Wikipedia topics
End of explanation
import logging
import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from numpy.random import RandomState
from sklearn import decomposition
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import NMF as SklearnNmf
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score
from sklearn.model_selection import ParameterGrid
import gensim.downloader
from gensim import matutils
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, LdaMulticore
from gensim.models.nmf import Nmf as GensimNmf
from gensim.parsing.preprocessing import preprocess_string
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
from sklearn.base import BaseEstimator, TransformerMixin
import scipy.sparse as sparse
class NmfWrapper(BaseEstimator, TransformerMixin):
def __init__(self, bow_matrix, **kwargs):
        self.corpus = sparse.csc_matrix(bow_matrix)
self.nmf = GensimNmf(**kwargs)
    def fit(self, X):
        self.nmf.update(self.corpus)
        return self  # follow the scikit-learn convention of returning the fitted estimator
@property
def components_(self):
return self.nmf.get_topics()
Explanation: It seems all four models successfully learned useful topics from the Wikipedia corpus.
5. And now for something completely different: Face decomposition from images
The NMF algorithm in Gensim is optimized for extremely large (sparse) text corpora, but it will also work on vectors from other domains!
Let's compare our model to other factorization algorithms on dense image vectors and check out the results.
To do that we'll patch sklearn's Faces Dataset Decomposition.
Sklearn wrapper
Let's create an Scikit-learn wrapper in order to run Gensim NMF on images.
End of explanation
gensim.models.nmf.logger.propagate = False
"""
============================
Faces dataset decompositions
============================
This example applies to :ref:`olivetti_faces` different unsupervised
matrix decomposition (dimension reduction) methods from the module
:py:mod:`sklearn.decomposition` (see the documentation chapter
:ref:`decompositions`) .
"""
print(__doc__)
# Authors: Vlad Niculae, Alexandre Gramfort
# License: BSD 3 clause
n_row, n_col = 2, 3
n_components = n_row * n_col
image_shape = (64, 64)
rng = RandomState(0)
# #############################################################################
# Load faces data
dataset = fetch_olivetti_faces(shuffle=True, random_state=rng)
faces = dataset.data
n_samples, n_features = faces.shape
# global centering
faces_centered = faces - faces.mean(axis=0)
# local centering
faces_centered -= faces_centered.mean(axis=1).reshape(n_samples, -1)
print("Dataset consists of %d faces" % n_samples)
def plot_gallery(title, images, n_col=n_col, n_row=n_row):
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
plt.suptitle(title, size=16)
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
vmax = max(comp.max(), -comp.min())
plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray,
interpolation='nearest',
vmin=-vmax, vmax=vmax)
plt.xticks(())
plt.yticks(())
plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.)
# #############################################################################
# List of the different estimators, whether to center and transpose the
# problem, and whether the transformer uses the clustering API.
estimators = [
('Eigenfaces - PCA using randomized SVD',
decomposition.PCA(n_components=n_components, svd_solver='randomized',
whiten=True),
True),
('Non-negative components - NMF (Sklearn)',
decomposition.NMF(n_components=n_components, init='nndsvda', tol=5e-3),
False),
('Non-negative components - NMF (Gensim)',
NmfWrapper(
bow_matrix=faces.T,
chunksize=3,
eval_every=400,
passes=2,
id2word={idx: idx for idx in range(faces.shape[1])},
num_topics=n_components,
minimum_probability=0,
random_state=42,
),
False),
('Independent components - FastICA',
decomposition.FastICA(n_components=n_components, whiten=True),
True),
('Sparse comp. - MiniBatchSparsePCA',
decomposition.MiniBatchSparsePCA(n_components=n_components, alpha=0.8,
n_iter=100, batch_size=3,
random_state=rng),
True),
('MiniBatchDictionaryLearning',
decomposition.MiniBatchDictionaryLearning(n_components=15, alpha=0.1,
n_iter=50, batch_size=3,
random_state=rng),
True),
('Cluster centers - MiniBatchKMeans',
MiniBatchKMeans(n_clusters=n_components, tol=1e-3, batch_size=20,
max_iter=50, random_state=rng),
True),
('Factor Analysis components - FA',
decomposition.FactorAnalysis(n_components=n_components, max_iter=2),
True),
]
# #############################################################################
# Plot a sample of the input data
plot_gallery("First centered Olivetti faces", faces_centered[:n_components])
# #############################################################################
# Do the estimation and plot it
for name, estimator, center in estimators:
print("Extracting the top %d %s..." % (n_components, name))
t0 = time.time()
data = faces
if center:
data = faces_centered
estimator.fit(data)
train_time = (time.time() - t0)
print("done in %0.3fs" % train_time)
if hasattr(estimator, 'cluster_centers_'):
components_ = estimator.cluster_centers_
else:
components_ = estimator.components_
# Plot an image representing the pixelwise variance provided by the
# estimator e.g its noise_variance_ attribute. The Eigenfaces estimator,
# via the PCA decomposition, also provides a scalar noise_variance_
# (the mean of pixelwise variance) that cannot be displayed as an image
# so we skip it.
if (hasattr(estimator, 'noise_variance_') and
estimator.noise_variance_.ndim > 0): # Skip the Eigenfaces case
plot_gallery("Pixelwise variance",
estimator.noise_variance_.reshape(1, -1), n_col=1,
n_row=1)
plot_gallery('%s - Train time %.1fs' % (name, train_time),
components_[:n_components])
plt.show()
Explanation: Modified face decomposition notebook
Adapted from the excellent Scikit-learn tutorial (BSD license):
Turn off the logger due to large number of info messages during training
End of explanation |
14,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding Multiple Wells
This notebook shows how a WellModel can be used to fit multiple wells with one response function. The influence of the individual wells is scaled by the distance to the observation point.
Developed by R.C. Caljé, (Artesia Water 2020), D.A. Brakenhoff, (Artesia Water 2019), and R.A. Collenteur, (Artesia Water 2018)
Step1: Load data from a Menyanthes file
Menyanthes is timeseries analysis software used by many people in the Netherlands. In this example a Menyanthes-file with one observation-series is imported, and simulated. There are several stresses in the Menyanthes-file, among which are three groundwater extractions with a significant influence on groundwater head.
Import the Menyanthes-file with observations and stresses.
Step2: Get the distances of the extractions to the observation well. Extraction 1 is about two times as far from the observation well as extraction 2 and 3. We will use this information later in our WellModel.
Step3: Then plot the observations, together with the diferent stresses in the Menyanthes file.
Step4: Create a model with a separate StressModel for each extraction
First we create a model with a separate StressModel for each groundwater extraction. First we create a model with the heads timeseries and add recharge as a stress.
Step5: Get the precipitation and evaporation timeseries and round the index to remove the hours from the timestamps.
Step6: Create a recharge stressmodel and add to the model.
Step7: Get the extraction timeseries.
Step8: Add each of the extractions as a separate StressModel.
Step9: Solve the model.
Note the use of ps.LmfitSolve. This is because of an issue concerning optimization with small parameter values in scipy.least_squares. This is something that may influence models containing a WellModel (which we will be creating later) and since we want to keep the models in this Notebook as similar as possible, we're also using ps.LmfitSolve here.
Step10: Visualize the results
Plot the decomposition to see the individual influence of each of the wells.
Step11: We can calculate the gain of each extraction (quantified as the effect on the groundwater level of a continuous extraction of ~1 Mm$^3$/yr).
Step12: Create a model with a WellModel
We can reduce the number of parameters in the model by including the three extractions in a WellModel. This WellModel takes into account the distances from the three extractions to the observation well, and assumes constant geohydrological properties. All of the extractions now share the same response function, scaled by the distance between the extraction well and the observation well.
First we create a new model and add recharge.
Step13: We have all the information we need to create a WellModel
Step14: Solve the model.
We are once again using ps.LmfitSolve. The user is notified about the preference for this solver in a WARNING when creating the WellModel (see above).
As we can see, the fit with the measurements (EVP) is similar to the result with the previous model, with each well included separately.
Step15: Visualize the results
Plot the decomposition to see the individual influence of each of the wells
Step16: Plot the stacked influence of each of the individual extraction wells in the results plot
Step17: Get parameters for each well (including the distance) and calculate the gain. The WellModel reorders the stresses from closest to the observation well, to furthest from the observation well. We have take this into account during the post-processing.
The gain of extraction 1 is lower than the gain of extraction 2 and 3. This will always be the case in a WellModel when the distance from the observation well to extraction 1 is larger than the distance to extraction 2 and 3.
Step18: Compare individual StressModels and WellModel
Compare the gains that were calculated by the individual StressModels and the WellModel. | Python Code:
import numpy as np
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions()
Explanation: Adding Multiple Wells
This notebook shows how a WellModel can be used to fit multiple wells with one response function. The influence of the individual wells is scaled by the distance to the observation point.
Developed by R.C. Caljé, (Artesia Water 2020), D.A. Brakenhoff, (Artesia Water 2019), and R.A. Collenteur, (Artesia Water 2018)
End of explanation
fname = '../data/MenyanthesTest.men'
meny = ps.read.MenyData(fname)
Explanation: Load data from a Menyanthes file
Menyanthes is timeseries analysis software used by many people in the Netherlands. In this example a Menyanthes-file with one observation-series is imported, and simulated. There are several stresses in the Menyanthes-file, among which are three groundwater extractions with a significant influence on groundwater head.
Import the Menyanthes-file with observations and stresses.
End of explanation
# Get distances from metadata
xo = meny.H["Obsevation well"]['xcoord']
yo = meny.H["Obsevation well"]['ycoord']
distances = []
extraction_names = ['Extraction 1', 'Extraction 2', 'Extraction 3']
for extr in extraction_names:
xw = meny.IN[extr]["xcoord"]
yw = meny.IN[extr]["ycoord"]
distances.append(np.sqrt((xo-xw)**2 + (yo-yw)**2))
extraction_names = [name.replace(" ", "_") for name in extraction_names] # replace spaces in names for Pastas
df = pd.DataFrame(distances, index=extraction_names, columns=['Distance to observation well'])
df
Explanation: Get the distances of the extractions to the observation well. Extraction 1 is about two times as far from the observation well as extraction 2 and 3. We will use this information later in our WellModel.
End of explanation
# plot timeseries
f1, axarr = plt.subplots(len(meny.IN)+1, sharex=True, figsize=(10,8))
oseries = meny.H['Obsevation well']["values"]
oseries.plot(ax=axarr[0], color='k')
axarr[0].set_title(meny.H['Obsevation well']["Name"])
for i, (name, data) in enumerate(meny.IN.items(), start=1):
data["values"].plot(ax=axarr[i])
axarr[i].set_title(name)
plt.tight_layout(pad=0)
Explanation: Then plot the observations, together with the different stresses in the Menyanthes file.
End of explanation
oseries = ps.TimeSeries(meny.H['Obsevation well']['values'].dropna(), name="heads", settings="oseries")
# create model
ml = ps.Model(oseries)
Explanation: Create a model with a separate StressModel for each extraction
First we create a model with a separate StressModel for each groundwater extraction. First we create a model with the heads timeseries and add recharge as a stress.
End of explanation
prec = meny.IN['Precipitation']['values']
prec.index = prec.index.round("D")
prec.name = "prec"
evap = meny.IN['Evaporation']['values']
evap.index = evap.index.round("D")
evap.name = "evap"
Explanation: Get the precipitation and evaporation timeseries and round the index to remove the hours from the timestamps.
End of explanation
rm = ps.RechargeModel(prec, evap, ps.Exponential, 'Recharge')
ml.add_stressmodel(rm)
Explanation: Create a recharge stressmodel and add to the model.
End of explanation
stresses = []
for name in extraction_names:
# get extraction timeseries
s = meny.IN[name.replace("_", " ")]['values']
# convert index to end-of-month timeseries
s.index = s.index.to_period("M").to_timestamp("M")
# resample to daily values
s_daily = ps.utils.timestep_weighted_resample_fast(s, "D")
# create pastas.TimeSeries object
stress = ps.TimeSeries(s_daily.dropna(), name=name, settings="well")
# append to stresses list
stresses.append(stress)
Explanation: Get the extraction timeseries.
End of explanation
for stress in stresses:
sm = ps.StressModel(stress, ps.Hantush, stress.name, up=False)
ml.add_stressmodel(sm)
Explanation: Add each of the extractions as a separate StressModel.
End of explanation
ml.solve(solver=ps.LmfitSolve)
Explanation: Solve the model.
Note the use of ps.LmfitSolve. This is because of an issue concerning optimization with small parameter values in scipy.least_squares. This is something that may influence models containing a WellModel (which we will be creating later) and since we want to keep the models in this Notebook as similar as possible, we're also using ps.LmfitSolve here.
End of explanation
ml.plots.decomposition();
Explanation: Visualize the results
Plot the decomposition to see the individual influence of each of the wells.
End of explanation
for i in range(len(extraction_names)):
name = extraction_names[i]
sm = ml.stressmodels[name]
p = ml.get_parameters(name)
gain = sm.rfunc.gain(p) * 1e6 / 365.25
print(f"{name}: gain = {gain:.3f} m / Mm^3/year")
df.at[name, 'gain StressModel'] = gain
Explanation: We can calculate the gain of each extraction (quantified as the effect on the groundwater level of a continuous extraction of ~1 Mm$^3$/yr).
End of explanation
ml_wm = ps.Model(oseries, oseries.name + "_wm")
rm = ps.RechargeModel(prec, evap, ps.Gamma, 'Recharge')
ml_wm.add_stressmodel(rm)
Explanation: Create a model with a WellModel
We can reduce the number of parameters in the model by including the three extractions in a WellModel. This WellModel takes into account the distances from the three extractions to the observation well, and assumes constant geohydrological properties. All of the extractions now share the same response function, scaled by the distance between the extraction well and the observation well.
First we create a new model and add recharge.
End of explanation
w = ps.WellModel(stresses, ps.HantushWellModel, "Wells", distances, settings="well")
ml_wm.add_stressmodel(w)
Explanation: We have all the information we need to create a WellModel:
- timeseries for each of the extractions, these are passed as a list of stresses
- distances from each extraction to the observation point, note that the order of these distances must correspond to the order of the stresses.
Note: the WellModel only works with a special version of the Hantush response function called HantushWellModel. This is because the response function must support scaling by a distance $r$. The HantushWellModel response function has been modified to support this. The Hantush response normally takes three parameters: the gain $A$, $a$ and $b$. This special version accepts 4 parameters: it interprets that fourth parameter as the distance $r$, and uses it to scale the parameters accordingly.
Create the WellModel and add to the model.
End of explanation
ml_wm.solve(solver=ps.LmfitSolve)
Explanation: Solve the model.
We are once again using ps.LmfitSolve. The user is notified about the preference for this solver in a WARNING when creating the WellModel (see above).
As we can see, the fit with the measurements (EVP) is similar to the result with the previous model, with each well included separately.
End of explanation
ml_wm.plots.decomposition();
Explanation: Visualize the results
Plot the decomposition to see the individual influence of each of the wells
End of explanation
ml_wm.plots.stacked_results(figsize=(10, 8));
Explanation: Plot the stacked influence of each of the individual extraction wells in the results plot
End of explanation
wm = ml_wm.stressmodels["Wells"]
for i in range(len(extraction_names)):
# get parameters
p = wm.get_parameters(model=ml_wm, istress=i)
# calculate gain
gain = wm.rfunc.gain(p) * 1e6 / 365.25
name = wm.stress[i].name
print(f"{name}: gain = {gain:.3f} m / Mm^3/year")
df.at[name, 'gain WellModel'] = gain
Explanation: Get parameters for each well (including the distance) and calculate the gain. The WellModel reorders the stresses from closest to the observation well, to furthest from the observation well. We have to take this into account during the post-processing.
The gain of extraction 1 is lower than the gain of extraction 2 and 3. This will always be the case in a WellModel when the distance from the observation well to extraction 1 is larger than the distance to extraction 2 and 3.
End of explanation
df.style.format("{:.4f}")
Explanation: Compare individual StressModels and WellModel
Compare the gains that were calculated by the individual StressModels and the WellModel.
End of explanation |
14,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 1
Step1: Load and prepare data
Step2: Here's the full dataset, and there are other columns. I will subselect a few of them by hand.
Step5: I will define the following functions to expedite the LOO risk and the Empirical risk.
Step6: As you can see, the empirical risk is much less than the leave-one-out risk! This can happen in more dimensions.
Nearest neighbor regression
Use the method described here
Step7: Exercise 1 For each k from 1 to 30 compute the nearest neighbors empirical risk and LOO risk. Plot these as a function of k and reflect on the bias-variance tradeoff here. (Hint
Step8: I decided to see what the performance is for k from 1 to 30. We see that the bias does not dominate until k exceeds 17, the performance is somewhat better for k around 12. This demonstrates that you can't trust the Empirical risk, since it includes the training sample. We can compare this LOO risk to that of linear regression (0.348) and see that it outperforms linear regression.
Exercise 2 Do the same but for the reduced predictor variables below... | Python Code:
# Import the necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import LeaveOneOut
from sklearn import linear_model, neighbors
%matplotlib inline
plt.style.use('ggplot')
# Where to save the figures
PROJECT_ROOT_DIR = ".."
datapath = PROJECT_ROOT_DIR + "/data/lifesat/"
plt.rcParams["figure.figsize"] = (8,6)
Explanation: Lab 1: Nearest Neighbor Regression and Overfitting
This is based on the notebook file 01 in Aurélien Geron's github page
End of explanation
# Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
oecd_bli = pd.read_csv(datapath+"oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.columns
oecd_bli["Life satisfaction"].head()
# Load and prepare GDP per capita data
# Download data from http://goo.gl/j1MSKe (=> imf.org)
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)
full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita, left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)
_ = full_country_stats.plot("GDP per capita",'Life satisfaction',kind='scatter')
Explanation: Load and prepare data
End of explanation
xvars = ['Self-reported health','Water quality','Quality of support network','GDP per capita']
X = np.array(full_country_stats[xvars])
y = np.array(full_country_stats['Life satisfaction'])
Explanation: Here's the full dataset, and there are other columns. I will subselect a few of them by hand.
End of explanation
def loo_risk(X,y,regmod):
    """Construct the leave-one-out square error risk for a regression model
    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar LOO risk
    """
loo = LeaveOneOut()
loo_losses = []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
regmod.fit(X_train,y_train)
y_hat = regmod.predict(X_test)
loss = np.sum((y_hat - y_test)**2)
loo_losses.append(loss)
return np.mean(loo_losses)
def emp_risk(X,y,regmod):
    """Return the empirical risk for square error loss
    Input: design matrix, X, response vector, y, a regression model, regmod
    Output: scalar empirical risk
    """
regmod.fit(X,y)
y_hat = regmod.predict(X)
return np.mean((y_hat - y)**2)
lin1 = linear_model.LinearRegression(fit_intercept=False)
print('LOO Risk: '+ str(loo_risk(X,y,lin1)))
print('Emp Risk: ' + str(emp_risk(X,y,lin1)))
Explanation: I will define the following functions to expedite the LOO risk and the Empirical risk.
End of explanation
# knn = neighbors.KNeighborsRegressor(n_neighbors=5)
Explanation: As you can see, the empirical risk is much less than the leave-one-out risk! This can happen in more dimensions.
Nearest neighbor regression
Use the method described here: http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html
I have already imported the necessary module, so you just need to use the regression object (like we used LinearRegression)
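For example, a k-NN regressor is used just like LinearRegression was above (a small sketch; k=5 is an arbitrary choice):
knn = neighbors.KNeighborsRegressor(n_neighbors=5)
knn.fit(X, y)                      # fit on the design matrix and response defined above
y_hat = knn.predict(X)             # predictions at the training points
print(np.mean((y_hat - y)**2))     # empirical risk, the same quantity emp_risk() computes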
End of explanation
LOOs = []
MSEs = []
K=30
Ks = range(1,K+1)
for k in Ks:
knn = neighbors.KNeighborsRegressor(n_neighbors=k)
LOOs.append(loo_risk(X,y,knn))
MSEs.append(emp_risk(X,y,knn))
plt.plot(Ks,LOOs,'r',label="LOO risk")
plt.title("Risks for kNN Regression")
plt.plot(Ks,MSEs,'b',label="Emp risk")
plt.legend()
_ = plt.xlabel('k')
Explanation: Exercise 1 For each k from 1 to 30 compute the nearest neighbors empirical risk and LOO risk. Plot these as a function of k and reflect on the bias-variance tradeoff here. (Hint: use the previously defined functions)
End of explanation
X1 = np.array(full_country_stats[['Self-reported health']])
LOOs = []
MSEs = []
K=30
Ks = range(1,K+1)
for k in Ks:
knn = neighbors.KNeighborsRegressor(n_neighbors=k)
LOOs.append(loo_risk(X1,y,knn))
MSEs.append(emp_risk(X1,y,knn))
plt.plot(Ks,LOOs,'r',label="LOO risk")
plt.title("Risks for kNN Regression")
plt.plot(Ks,MSEs,'b',label="Emp risk")
plt.legend()
_ = plt.xlabel('k')
Explanation: I decided to see what the performance is for k from 1 to 30. We see that the bias does not dominate until k exceeds 17, and the performance is somewhat better for k around 12. This demonstrates that you can't trust the empirical risk, since it includes the training sample. We can compare this LOO risk to that of linear regression (0.348) and see that k-NN outperforms linear regression.
Exercise 2 Do the same but for the reduced predictor variables below...
End of explanation |
14,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
These cells are used to pre-process the data.
They only need to be run once, and after that the saved data file can be loaded up from disk.
Step1: Load the data
Step2: Generate regularization parameter files & tune them
Since we don't have any existing regularization parameter files for ESPRESSO, we have to make some new ones.
This is needed because the default wobble regularization is tuned to HARPS, which has a different number of spectral orders and different wavelength coverage - if we try to run with those files, the optimization will (a) be non-optimal and (b) eventually crash when we try to access an order than does not exist for HARPS.
Step3: We'll tune the regularization using a train-and-validate approach, so let's set aside some epochs to be the validation set
Step4: Here's an example of how this regularization tuning will go for one order | Python Code:
data = wobble.Data()
filenames = glob.glob('/Users/mbedell/python/wobble/data/toi/TOI-*_CCF_A.fits')
for filename in tqdm(filenames):
try:
sp = wobble.Spectrum()
sp.from_ESPRESSO(filename, process=True)
data.append(sp)
except Exception as e:
print("File {0} failed; error: {1}".format(filename, e))
data.write('../data/toi.hdf5')
Explanation: These cells are used to pre-process the data.
They only need to be run once, and after that the saved data file can be loaded up from disk.
End of explanation
data = wobble.Data(filename='../data/toi.hdf5')
R = np.copy(data.R) # we'll need this later
data
data.drop_bad_orders(min_snr=3)
data.drop_bad_epochs(min_snr=3)
data.orders
r = 0
good = data.ivars[r] > 0.
for e in [0,10,20]:
plt.errorbar(data.xs[r][e][good[e]], data.ys[r][e][good[e]],
1./np.sqrt(data.ivars[r][e][good[e]]), ls='', fmt='o', ms=2, alpha=0.5)
plt.title('Echelle order #{0}'.format(data.orders[r]), fontsize=14);
Explanation: Load the data
End of explanation
star_filename = '../wobble/regularization/toi_star.hdf5'
tellurics_filename = '../wobble/regularization/toi_tellurics.hdf5'
wobble.generate_regularization_file(star_filename, R, type='star')
wobble.generate_regularization_file(tellurics_filename, R, type='telluric')
plot_dir = '../regularization/toi/'
if not os.path.exists(plot_dir):
os.makedirs(plot_dir)
Explanation: Generate regularization parameter files & tune them
Since we don't have any existing regularization parameter files for ESPRESSO, we have to make some new ones.
This is needed because the default wobble regularization is tuned to HARPS, which has a different number of spectral orders and different wavelength coverage - if we try to run with those files, the optimization will (a) be non-optimal and (b) eventually crash when we try to access an order than does not exist for HARPS.
End of explanation
validation_epochs = np.random.choice(data.N, data.N//6, replace=False) # 3 epochs for validation set
r = 100
for e in [validation_epochs[0]]:
plt.errorbar(data.xs[r][e][good[e]], data.ys[r][e][good[e]],
1./np.sqrt(data.ivars[r][e][good[e]]), ls='', fmt='o', ms=2, alpha=0.5)
Explanation: We'll tune the regularization using a train-and-validate approach, so let's set aside some epochs to be the validation set:
End of explanation
r = 100
o = data.orders[r]
objs = wobble.setup_for_order(r, data, validation_epochs)
wobble.improve_order_regularization(o, star_filename, tellurics_filename,
*objs,
verbose=False, plot=False,
basename='{0}o{1}'.format(plot_dir, o),
K_t=0, L1=True, L2=True)
Explanation: Here's an example of how this regularization tuning will go for one order:
End of explanation |
14,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
W2 Lab Assignment
Internet Movie Database (IMDb) provides various information about movies, such as total budgets, lengths, actors, and user ratings. They are publicly available from here. In this lab, let's explore a processed dataset named 'imdb.csv', which contains some basic information of movies.
Download the file from Canvas. There are 4 columns separated by tab
Step1: There are many ways to do Q1. One way is to use dictionaries where the key
Step2: Python automates the job above by using Counter.
Step3: Once all lines are read, we want to print the dictionary, which can be done by iterating its key
Step4: You can get the keys (the years) by using .keys() function.
Step5: and you have convenient functions like min() and max() for calculating the min and max value of a list or iterable.
Step6: Code for Q1
Step7: Q2
Step8: Code for Q2
Step9: Q3
Step10: Code for Q3 | Python Code:
import csv
from itertools import islice
f = open('imdb.csv', 'r')
reader = csv.reader(f, delimiter='\t')
for row in islice(reader, 0, 5):
print(row)
print(row[1])
Explanation: W2 Lab Assignment
Internet Movie Database (IMDb) provides various information about movies, such as total budgets, lengths, actors, and user ratings. They are publicly available from here. In this lab, let's explore a processed dataset named 'imdb.csv', which contains some basic information of movies.
Download the file from Canvas. There are 4 columns separated by tab:
Title: title of the movie;
Year: release year;
Rating: average IMDb user rating;
Votes: number of IMDB users who rated this movie
First, we want to get some insights from the data with Python; then we want to display information on a web page and prettify it with html/css.
Things to note:
Let's use Python 3.5;
There are 313,012 lines in the file. When printing things, print selectively.
Part 1. Data manipulation with Python
Q1: What is the first and last year in this dataset? How many movies released in each year?
To do this, we first need to read the CSV file. Python provides the csv module to read and write CSV files. The csv.reader function returns a Python object which will iterate over lines in the given file. Each line is returned as a list of strings, so that we can access a particular column using list index. If we want to ignore the first line, we can use islice. It is like slicing a list, but it can slice an iterator (e.g. file stream). For instance, islice(reader, 0, 5) means "give me the first 5 items from the reader". islice(reader, 1, 5) means "give me the 4 items starting from the second item".
A basic usage example to read the first 5 lines of 'imdb.csv':
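If the first line were a header you wanted to skip, you could slice starting from the second item instead (a small sketch):
f = open('imdb.csv', 'r')
reader = csv.reader(f, delimiter='\t')
for row in islice(reader, 1, 5):   # items 2 through 5, i.e. skip the first line
    print(row)
f.close()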
End of explanation
dt = {}
year = 1972
if year not in dt:
dt[year] = 1
else:
dt[year] += 1
print(dt)
Explanation: There are many ways to do Q1. One way is to use dictionaries where the key: value pairs are:
key: year
value: a list of movie titles or number of movies
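Putting the pieces together, a minimal sketch of counting movies per year while reading the file (assuming the tab-separated layout described above, with Year in the second column and a header row to skip):
year_counts = {}
with open('imdb.csv', 'r') as f:
    reader = csv.reader(f, delimiter='\t')
    for row in islice(reader, 1, None):          # skip the header row
        year = int(row[1])
        year_counts[year] = year_counts.get(year, 0) + 1
print(min(year_counts.keys()), max(year_counts.keys()))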
End of explanation
from collections import Counter
movie_counter = Counter()
movie_counter[1972] +=1
print(movie_counter[1972])
print(movie_counter[1970])
Explanation: Python automates the job above by using Counter.
End of explanation
for key,val in dt.items():
print(key,val)
for key,val in movie_counter.items():
print(key,val)
Explanation: Once all lines are read, we want to print the dictionary, which can be done by iterating its key: value pairs.
End of explanation
movie_counter[1980] += 5
movie_counter[2015] += 1
movie_counter.keys()
Explanation: You can get the keys (the years) by using .keys() function.
End of explanation
alist = [23,3,5,4,2,1,1,0,1000]
print(min(alist))
print(max(alist))
Explanation: and you have convenient functions like min() and max() for calculating the min and max value of a list or iterable.
End of explanation
import pandas as pd
imdb = pd.read_csv('imdb.csv', delimiter='\t')
imdb.head()
min(imdb['Year'])
max(imdb['Year'])
from collections import Counter
Counter(imdb["Year"])
Explanation: Code for Q1
End of explanation
import numpy as np
alist = [1,3,6,2,5,2]
print(np.mean(alist))
print(np.median(alist))
Explanation: Q2: What is the average ratings/votes?
We can store the ratings/votes column as a list and then calculate various basic statistics (mean, median, etc.). To do this, we can use the NumPy library and call the function numpy.mean and numpy.median. For example,
End of explanation
# implement below
imdb['Rating'].mean()
imdb['Votes'].mean()
Explanation: Code for Q2
End of explanation
import operator
dt = {1971: 2, 1975: 10, 1962: 1, 1980: 50, 1981: 55}
sorted_x_by_val = sorted(dt.items(), key=operator.itemgetter(1), reverse=True )
print(sorted_x_by_val)
for elem in sorted_x_by_val:
print(elem[0],elem[1])
Explanation: Q3: What are the 5 movies that have the highest ratings/votes?
Store the movie titles and ratings information as a dictionary:
key: movie title
value: movie rating
Then, we can sort the dictionary based on its values, which will return a list of tuples. Note to print only the top 5 movies.
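A minimal sketch of that approach (assuming the same column layout and header row as above):
ratings = {}
with open('imdb.csv', 'r') as f:
    reader = csv.reader(f, delimiter='\t')
    for row in islice(reader, 1, None):
        ratings[row[0]] = float(row[2])          # title -> rating
top5 = sorted(ratings.items(), key=operator.itemgetter(1), reverse=True)[:5]
for title, rating in top5:
    print(title, rating)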
End of explanation
# implement below
import warnings
warnings.filterwarnings('ignore')
imdb.sort_values(by='Rating', ascending=False).head()
imdb.sort_values(by='Votes', ascending=False).head()
Explanation: Code for Q3
End of explanation |
14,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
并发编程
不管在Python或者Java甚至js中都需要并发编程的存在.并发编程的目的是为了尽可能的使用机器的资源, 以达到更高的单机性能, 进而提升性价比.
这里有2个概念.
* 并发
Step1: 在以上代码中, 我们创建了2个线程分别根据参数进行打印操作. 思考为什么一个打印完了之后才打印另一个?
Master/Worker形式
在通常开发中, 一般很少直接将某一部分业务直接分配给一个方法. 更通用的做法是主线程进行分发任务, worker根据master给出的任务执行对应的工作.
Step2: 在以上例子中, 我们通过在主线程中往队列中发送"任务"来达到分配任务的目的.
而worker中通过从队列中获取数据, 并根据任务数据执行相应的任务.
在这个过程中, worker只要一做完当前工作, 就会在队列处等待新任务的到来.
这样就可以确保任务能被第一时间消化, 工作非常的饱和. 自然性价比就出来了.
多线程中的竞态问题
在多线程中, 如果多个线程同时对一个资源进行操作, 那后果将是灾难性的. 思考以下例子, 请思考输出和输出该值的原因的是什么
Step3: 同步锁机制
解决以上问题的方法就是增加一个锁, 在修改时防止其他线程修改. 确保同一时刻只有一个线程在操作共享变量.
Step4: 请思考以上还有可能出现什么问题?正确的做法是什么?
多进程
多进程与多线程基本类似, 不同的是, 多进程是以自身为蓝本创建一个新的进程, 并且开辟一片新的内存空间.需要注意的点有以下几点.
* 多进程会采用写时复制技术来降低创建进程带来的内存拷贝开销, 共享变量只有在修改时才会进行拷贝
* 多进程的开销会比多线程来的更大, 但是可以实现更好的并发效果
* 多进程编程中不需要考虑变量之间的同步关系, 但是还是需要注意进程之外的同步问题(文件读写)
* 多进程中一般使用临界区, 互斥量, 信号量或者事件来进行同步操作
python中使用multiprocessing库来支持多进程编程
Step5: 写时复制
写入时复制是一种计算机程序设计领域的优化策略。其核心思想是,如果有多个调用者同时请求相同资源(如内存或磁盘上的数据存储),他们会共同获取相同的指针指向相同的资源,直到某个调用者试图修改资源的内容时,系统才会真正复制一份专用副本(private copy)给该调用者,而其他调用者所见到的最初的资源仍然保持不变。这个过程对其他的调用者是透明的. 我们来验证一下
Step6: 从以上程序的输出, 我们可以看到, 操作系统在我们修改变量的时候才会拷贝这个变量.
同时需要注意到, 我们针对全局变量的修改会被全部隔离开.
多进程中的锁
多进程中的加锁方式与多线程中加锁方式, 在代码上并无区别, 但是在实现原理上却并不一样. 思考下多进程加锁如何实现?
Step7: 协程
协程与多线程多进程的原理完全不用, 协程使用过在一个线程内尽可能的执行更多的指令以达到并发处理的能力.相对比与多线程与多线程
* 协程更加轻量化, 一个协程就是一个函数, 在协程之间切换只需要切换栈空间和寄存器内容即可.
* 协程只适用于IO密集型任务, 不适用于计算密集型任务.
* 协程能最大化利用单核性能, 但是并不能最大化利用多核性能.(新的方向是同时使用多进程和协程)
python中为协程提供支持的是asyncio这个库. 我们这边只做简单介绍.
Step8: 以下使用httpx库配合使用asyncio来实现快速的抓取网页
Step9: asyncio原理
Python的asyncio实现的原理就是对generator的极致应用.
* 使用yield交出控制权
* 使用调度器调度所有的协程(生成器)
以下我们实现个简单的自己的asyncio | Python Code:
import time
# Import the threading library
import threading
def say_hello(name):
for i in range(10):
print("hello {}".format(name))
thread1 = threading.Thread(target=say_hello, args=('small red',))
thread2 = threading.Thread(target=say_hello, args=('small light',))
thread1.start()
thread2.start()
Explanation: Concurrent programming
Whether in Python, Java, or even JavaScript, concurrent programming is needed. Its goal is to use a machine's resources as fully as possible, achieving higher single-machine performance and therefore better cost-effectiveness.
There are two concepts here:
* Concurrency: the ability to deal with multiple tasks at the same time
* Parallelism: actually executing multiple tasks at the same instant
Common forms of concurrent programming:
* Multi-threading: distribute different pieces of work across different threads
* Multi-processing: distribute different pieces of work across different processes
* Coroutines: a custom scheduler in which each task voluntarily hands control to other tasks whenever it has to wait
Threads and processes
A process is a single run of a program over some data set; it is the basic unit the operating system uses for resource allocation and scheduling.
A thread is the smallest unit the operating system can schedule for execution. It is contained within a process and is the actual unit of work inside the process.
In short: the operating system manages processes, a process can create or destroy threads (except the main thread), but threads are scheduled by the system.
Multi-threading
When using multiple threads, the Master/Worker pattern is common. We start from the simplest multi-threading and then move on to the more general model.
Using multiple threads in Python requires the threading library; threads are created with its Thread class.
End of explanation
import random
from queue import Queue
from threading import Thread
def worker(thread_id: int, queue: Queue):
while True:
data = queue.get(True)
print("worker-{} receive task: ".format(thread_id), data)
time.sleep(data)
if __name__ == '__main__':
threads = []
task_queue = Queue()
for i in range(3):
threads.append(Thread(target=worker, args=(i, task_queue), daemon=True))
list(map(lambda t: t.start(), threads))
for i in range(10):
task_queue.put(random.randint(1, 3))
time.sleep(10)
Explanation: In the code above we created two threads that each print according to their argument. Think about why one finishes printing before the other starts.
The Master/Worker pattern
In everyday development, a piece of business logic is rarely handed directly to a single function. The more common approach is for the main thread to hand out tasks, while the workers carry out the work described by the tasks the master gives them.
End of explanation
amount = 0
def worker(count):
global amount
for i in range(count):
amount = amount + 1
if __name__ == '__main__':
t1 = Thread(target=worker, args=(10,))
t2 = Thread(target=worker, args=(20,))
t3 = Thread(target=worker, args=(30,))
t1.start()
t2.start()
t3.start()
t3.join()
print(amount)
Explanation: In the example above, the main thread assigns work by pushing "tasks" onto a queue.
Each worker pulls data from the queue and performs the corresponding task based on the task data.
As soon as a worker finishes its current piece of work, it waits at the queue for the next task to arrive.
This ensures tasks are consumed as soon as they appear and keeps the workers fully busy, which is exactly where the cost-effectiveness comes from.
Race conditions in multi-threading
With multiple threads, letting several threads operate on the same resource at the same time can be disastrous. Study the example below and think about what it prints and why it prints that value.
End of explanation
amount = 0
lock = Lock()
def worker(count):
global amount
for i in range(count):
lock.acquire(True)
amount = amount + 1
lock.release()
Explanation: Synchronization with locks
The way to solve the problem above is to add a lock that prevents other threads from modifying the shared variable while one thread is updating it, ensuring that only one thread operates on the shared variable at any moment.
End of explanation
import time
# Import multiprocessing support
from multiprocessing import Process
def say_hello(name):
for i in range(10):
print("hello {}".format(name))
time.sleep(1)
if __name__ == '__main__':
process1 = Process(target=say_hello, args=('small red',))
process2 = Process(target=say_hello, args=('small light',))
process1.start()
process2.start()
Explanation: Think about what other problems the code above could still have, and what the correct approach would be.
Multi-processing
Multi-processing is broadly similar to multi-threading, except that a new process is created as a copy of the current one, with its own separate memory space. A few points to keep in mind:
* Multi-processing uses copy-on-write to reduce the memory-copy cost of creating a process; shared variables are only copied when they are modified
* Processes are more expensive to create than threads, but they can achieve better parallelism
* With multiple processes you do not need to synchronize in-memory variables, but you still need to handle synchronization outside the process (e.g. file reads/writes)
* Between processes, synchronization is usually done with critical sections, mutexes, semaphores, or events
Python supports multi-process programming through the multiprocessing library
End of explanation
var = 0
def worker(worker_id):
global var
print(worker_id, id(var), var)
var = worker_id
print(worker_id, id(var), var)
if __name__ == '__main__':
process1 = Process(target=worker, args=(1,))
process2 = Process(target=worker, args=(2,))
print(id(var), var)
process1.start()
process2.start()
process2.join()
print(id(var), var)
Explanation: Copy-on-write
Copy-on-write is an optimization strategy in computer programming. The core idea is that when multiple callers request the same resource (such as memory or data stored on disk), they all receive the same pointer to the same resource; only when some caller tries to modify the contents does the system actually make a private copy for that caller, while the resource seen by the other callers stays unchanged. The process is transparent to the other callers. Let's verify this.
End of explanation
from multiprocessing import Lock
lock = Lock()
def worker(count):
for i in range(count):
# lock.acquire(True)
with open('amount.txt', 'r+') as w:
amount = str(int(w.read()) + 1)
w.seek(0)
w.write(amount)
# lock.release()
if __name__ == '__main__':
with open('amount.txt', 'w') as fp:
fp.write('0')
p1 = Process(target=worker, args=(1000,))
p2 = Process(target=worker, args=(2000,))
p3 = Process(target=worker, args=(3000,))
p1.start()
p2.start()
p3.start()
p3.join()
Explanation: From the output of the program above we can see that the operating system only copies the variable at the moment we modify it.
Note also that our modifications to the global variable are completely isolated between processes.
Locks with multiple processes
Locking with multiple processes looks exactly the same in code as locking with multiple threads, but the underlying implementation is different. Think about how a lock shared between processes could be implemented.
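One common pattern is to create the lock in the parent and pass it explicitly to each child process (a sketch, not the only possible answer; it assumes amount.txt has been initialised as in the cell above):
from multiprocessing import Process, Lock
def locked_worker(lock, count):
    for i in range(count):
        with lock:                      # only one process at a time enters this block
            with open('amount.txt', 'r+') as w:
                amount = str(int(w.read()) + 1)
                w.seek(0)
                w.write(amount)
if __name__ == '__main__':
    lock = Lock()
    workers = [Process(target=locked_worker, args=(lock, 1000)) for _ in range(3)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()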
End of explanation
import asyncio, threading
async def hello():
print('Hello World! [THREAD %s] 1' % (threading.currentThread()))
await asyncio.sleep(2)
print('Hello Python! [THREAD %s] 1' % (threading.currentThread()))
async def world():
print('Hello World! [THREAD %s] 2' % (threading.currentThread()))
await asyncio.sleep(5)
print('Hello Python! [THREAD %s] 2' % (threading.currentThread()))
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait([hello(), world()]))
loop.close()
Explanation: Coroutines
Coroutines work on a completely different principle from threads and processes: they achieve concurrency by executing as many instructions as possible within a single thread. Compared with multi-threading and multi-processing:
* Coroutines are much lighter weight; a coroutine is just a function, and switching between coroutines only requires swapping stack space and register contents.
* Coroutines are only suitable for I/O-bound tasks, not CPU-bound tasks.
* Coroutines can make maximal use of a single core, but not of multiple cores (a newer direction is to combine multiple processes with coroutines).
In Python, coroutines are supported by the asyncio library; we only give a brief introduction here.
End of explanation
import asyncio
import httpx
async def main():
client = httpx.AsyncClient()
for i in range(1000):
print(await client.get('http://www.baidu.com'))
if __name__ == '__main__':
loop = asyncio.new_event_loop()
loop.run_until_complete(main())
loop.close()
Explanation: Below we use the httpx library together with asyncio to fetch web pages quickly.
End of explanation
# See the contents of the sources directory for the full implementation
Explanation: How asyncio works
Python's asyncio is essentially an extreme application of generators:
* yield is used to give up control
* a scheduler drives all of the coroutines (generators)
Below we implement a simple asyncio of our own
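A very small sketch of that idea (not the real asyncio, just generators driven round-robin by a toy scheduler):
from collections import deque
def countdown(name, n):
    while n > 0:
        print(name, n)
        n -= 1
        yield                 # give control back to the scheduler
def run_all(tasks):
    queue = deque(tasks)      # the "scheduler": round-robin over all generators
    while queue:
        task = queue.popleft()
        try:
            next(task)        # run the task until its next yield
            queue.append(task)
        except StopIteration:
            pass              # task finished, drop it
run_all([countdown('a', 3), countdown('b', 2)])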
End of explanation |
14,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def solve_euler(derivs, y0, x):
Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which of solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
# YOUR CODE HERE
#raise NotImplementedError()
y = np.empty_like(x)
y[0] = y0
h = x[1] - x[0]
for n in range (0, len(x) - 1):
y[n + 1] = y[n] + h * derivs(y[n],x[n])
return y
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: Euler's method
Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation
$$ \frac{dy}{dx} = f(y(x), x) $$
with the initial condition:
$$ y(x_0)=y_0 $$
Euler's method performs updates using the equations:
$$ y_{n+1} = y_n + h f(y_n,x_n) $$
$$ h = x_{n+1} - x_n $$
Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
End of explanation
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.
    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y,x) where y
        and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.
    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
# YOUR CODE HERE
#raise NotImplementedError()
y = np.empty_like(x)
y[0] = y0
h = x[1] - x[0]
for n in range (0, len(x) - 1):
# y[n + 1] = y[n] + h * ((derivs(y[n]+(h/2)) * derivs(y[n],x[n]), x[n]) * (y[n] + (h/2) * derivs(y[n],x[n]) + (h/2)))
y[n+1] = y[n] + h * derivs(y[n] + h/2 * derivs(y[n],x[n]), x[n] + h/2)
return y
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
Explanation: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:
$$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$
Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
End of explanation
def solve_exact(x):
    """Compute the exact solution to dy/dx = x + 2y.
    Parameters
    ----------
    x : np.ndarray
        Array of x values to compute the solution at.
    Returns
    -------
    y : np.ndarray
        Array of solutions at y[i] = y(x[i]).
    """
# YOUR CODE HERE
#raise NotImplementedError()
y = 0.25*np.exp(2*x) - 0.5*x - 0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
Explanation: You are now going to solve the following differential equation:
$$
\frac{dy}{dx} = x + 2y
$$
which has the analytical solution:
$$
y(x) = 0.25 e^{2x} - 0.5 x - 0.25
$$
First, write a solve_exact function that computes the exact solution and follows the specification described in the docstring:
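As a quick check of the formula: differentiating gives $\frac{dy}{dx} = 0.5 e^{2x} - 0.5$, while $x + 2y = x + 0.5e^{2x} - x - 0.5 = 0.5e^{2x} - 0.5$, so the two sides agree; and $y(0) = 0.25 - 0.25 = 0$, which is the initial condition used below.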
End of explanation
# YOUR CODE HERE
# raise NotImplementedError()
x = np.linspace(0,1.0,11)
y0 = 0.0  # initial condition y(0) = 0, matching the exact solution above
def derivs(y, x):
return x+2*y
plt.plot(solve_euler(derivs, y0, x), label = 'euler')
plt.plot(solve_midpoint(derivs, y0, x), label = 'midpoint')
plt.plot(solve_exact(x), label = 'exact')
plt.plot(odeint(derivs, y0, x), label = 'odeint')
plt.legend(loc='best')
assert True # leave this for grading the plots
Explanation: In the following cell you are going to solve the above ODE using four different algorithms:
Euler's method
Midpoint method
odeint
Exact
Here are the details:
Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).
Define the derivs function for the above differential equation.
Using the solve_euler, solve_midpoint, odeint and solve_exact functions to compute
the solutions using the 4 approaches.
Visualize the solutions on a single figure with two subplots:
Plot the $y(x)$ versus $x$ for each of the 4 approaches.
Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches.
Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.
While your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
End of explanation |
14,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 10
Step1: Notice that in the previous example, the function takes no arguments and returns nothing. It just does the task that it's supposed to.
Example
Step2: Note the cobbDouglas() has a docstring. The docstring is optional, but it tells users about the function. The contents of the docstring can be accessed with the help() function. It's good practice to make use of doc strings.
Step3: The Solow model with exogenous population growth
Recall the Solow growth model with exogenous labor growth
Step4: Example
Step5: Example
Step6: Example | Python Code:
def hi():
print('Hello world!')
hi()
Explanation: Class 10: User-defined functions and a Solow growth model example
User-defined functions
Create a new function by using the def keyword followed by the designated name of the new function. In the definition, the function name has to be followed by a set of parentheses and a colon. If the function has arguments, put them inside the parentheses separated by commas. The code to be performed when the function is run follows the definition line and is indented.
Example: A function that prints Hello world!
End of explanation
def cobbDouglas(A,alpha,k):
''' Computes output per worker y given A, alpha, and a value of capital per worker k
Args:
A (float): TFP
alpha (float): Cobb-Douglas parameter
k (float or numpy array): capital per worker
Returns
float or numpy array'''
return A*k**alpha
Explanation: Notice that in the previous example, the function takes no arguments and returns nothing. It just does the task that it's supposed to.
Example:
A function that returns the computes the fllowing production function:
\begin{align}
y & = A k^{\alpha}
\end{align}
End of explanation
# Use cobbDouglas to plot the production function for a bunch of values of alpha between 0 and 1.
Explanation: Note the cobbDouglas() has a docstring. The docstring is optional, but it tells users about the function. The contents of the docstring can be accessed with the help() function. It's good practice to make use of doc strings.
End of explanation
def solow_example(A,alpha,delta,s,n,K0,L0,T):
'''Returns DataFrame with simulated values for a Solow model with labor growth and constant TFP
Args:
A (float): TFP
alpha (float): Cobb-Douglas production function parameter
delta (float): capital depreciation rate
s (float): saving rate
n (float): labor force growth rate
K0 (float): initial capital stock
L0 (float): initial labor force
T (int): number of periods to simulate
Returns:
pandas DataFrame with columns:
'capital', 'labor', 'output', 'consumption', 'investment',
'capital_pw','output_pw', 'consumption_pw', 'investment_pw'
'''
# Initialize a variable called capital as a (T+1)x1 array of zeros and set first value to K0
capital = np.zeros(T+1)
capital[0] = K0
# Initialize a variable called labor as a (T+1)x1 array of zeros and set first value to L0
labor = np.zeros(T+1)
labor[0] = L0
# Compute all capital and labor values by iterating over t from 0 through T
for t in np.arange(T):
labor[t+1] = (1+n)*labor[t]
capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t]
# Store the simulated capital and labor values in a pandas DataFrame called df
df = pd.DataFrame({'capital':capital,'labor':labor})
# Create columns in the DataFrame to store computed values of the other endogenous variables
df['output'] = A*df['capital']**alpha*df['labor']**(1-alpha)
df['consumption'] = (1-s)*df['output']
df['investment'] = df['output'] - df['consumption']
# Create columns in the DataFrame to store capital per worker, output per worker, consumption per worker, and investment per worker
df['capital_pw'] = df['capital']/df['labor']
df['output_pw'] = df['output']/df['labor']
df['consumption_pw'] = df['consumption']/df['labor']
df['investment_pw'] = df['investment']/df['labor']
return df
Explanation: The Solow model with exogenous population growth
Recall the Solow growth model with exogenous labor growth:
\begin{align}
Y_t & = AK_t^{\alpha} L_t^{1-\alpha}\tag{1}
\end{align}
The supply of labor grows at an exogenously determined rate $n$ and so its value is determined recursively by a first-order difference equation:
\begin{align}
L_{t+1} & = (1+n) L_t \tag{2}
\end{align}
The rest of the economy is characterized by the same equations as before:
\begin{align}
C_t & = (1-s)Y_t \tag{3}\\
Y_t & = C_t + I_t \tag{4}\\
K_{t+1} & = I_t + ( 1- \delta)K_t \tag{5}
\end{align}
Combine Equations (1), (3), (4), and (5) to eliminate $C_t$, $I_t$, and $Y_t$ and obtain a recurrence relation specifying $K_{t+1}$ as a function of $K_t$ and $L_t$:
\begin{align}
K_{t+1} & = sAK_t^{\alpha}L_t^{1-\alpha} + ( 1- \delta)K_t \tag{6}
\end{align}
Given initial values for capital and labor, Equations (2) and (6) can be iterated on to compute the values of the capital stock and labor supply at some future date $T$. Furthermore, the values of consumption, output, and investment at date $T$ can also be computed using Equations (1), (3), (4), and (5).
Use a function to simulate the model
Suppose that we wanted to simulate the Solow model with different parameter values so that we could compare the simulations. Since we'd be doing the same basic steps multiple times using different numbers, it would make sense to define a function so that we could avoid repetition.
The code below defines a function called solow_example() that simulates the Solow model with exogenous labor growth. solow_example() takes as arguments the parameters of the Solow model $A$, $\alpha$, $\delta$, $s$, and $n$; the initial values $K_0$ and $L_0$; and the number of simulation periods $T$. solow_example() returns a Pandas DataFrame with computed values for aggregate and per worker quantities.
End of explanation
# Create the DataFrame with simulated values
df = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df['investment_pw'],lw=3)
ax.grid()
ax.set_title('Investment per worker')
Explanation: Example: A single simulation
Use the function solow_example() to simulate the Solow growth model with exogenous labor growth for $t=0\ldots 100$. For the simulation, assume the following values of the parameters:
\begin{align}
A & = 10\\
\alpha & = 0.35\\
s & = 0.15\\
\delta & = 0.1\\
n & = 0.01
\end{align}
Furthermore, suppose that the initial values of capital and labor are:
\begin{align}
K_0 & = 20\\
L_0 & = 1
\end{align}
End of explanation
df1 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=20,L0=1,T=100)
df2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df1['capital_pw'],lw=3)
ax.plot(df2['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df1['output_pw'],lw=3)
ax.plot(df2['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df1['consumption_pw'],lw=3)
ax.plot(df2['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df1['investment_pw'],lw=3,label='$k_0=20$')
ax.plot(df2['investment_pw'],lw=3,label='$k_0=10$')
ax.grid()
ax.set_title('Investment per worker')
ax.legend(loc='lower right')
Explanation: Example: Two simulations with different initial capital stocks
Repeat the previous exercise for two simulations of the Solow model having two different initial values of capital: $K_0 = 20$ and $K_0'=10$.
End of explanation
df1 = solow_example(A=5,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
df2 = solow_example(A=10,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
df3 = solow_example(A=15,alpha=0.35,delta=0.1,s=0.15,n=0.01,K0=10,L0=1,T=100)
# Create a 2x2 grid of plots of the capital per worker, output per worker, consumption per worker, and investment per worker
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(2,2,1)
ax.plot(df1['capital_pw'],lw=3)
ax.plot(df2['capital_pw'],lw=3)
ax.plot(df3['capital_pw'],lw=3)
ax.grid()
ax.set_title('Capital per worker')
ax = fig.add_subplot(2,2,2)
ax.plot(df1['output_pw'],lw=3)
ax.plot(df2['output_pw'],lw=3)
ax.plot(df3['output_pw'],lw=3)
ax.grid()
ax.set_title('Output per worker')
ax = fig.add_subplot(2,2,3)
ax.plot(df1['consumption_pw'],lw=3)
ax.plot(df2['consumption_pw'],lw=3)
ax.plot(df3['consumption_pw'],lw=3)
ax.grid()
ax.set_title('Consumption per worker')
ax = fig.add_subplot(2,2,4)
ax.plot(df1['investment_pw'],lw=3,label='$A=5$')
ax.plot(df2['investment_pw'],lw=3,label='$A=10$')
ax.plot(df3['investment_pw'],lw=3,label='$A=15$')
ax.grid()
ax.set_title('Investment per worker')
ax.legend(loc='lower right',ncol=3)
Explanation: Example: Three simulations with different TFPs
Repeat the previous exercise for three simulations of the Solow model that share the same initial values $K_0 = 10$ and $L_0 = 1$ but have three different TFP values: $A = 5$, $A = 10$, and $A = 15$.
End of explanation |
14,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a model with traffic_last_5min feature
Introduction
In this notebook, we'll train a taxifare prediction model but this time with an additional feature of traffic_last_5min.
Step1: Load raw data
Step2: Use tf.data to read the CSV files
These functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an additional feature traffic_last_5min.
Step3: Build a simple keras DNN model
Step4: Next, we can call the build_model to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.
Step5: Export and deploy model
Step6: Note that the last gcloud call below, which deploys the mode, can take a few minutes, and you might not see the earlier echo outputs while that job is still running. If you want to make sure that your notebook is not stalled and your model is actually getting deployed, view your models in the console at https | Python Code:
import os
import shutil
from datetime import datetime
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai/region $REGION
Explanation: Training a model with traffic_last_5min feature
Introduction
In this notebook, we'll train a taxifare prediction model but this time with an additional feature of traffic_last_5min.
End of explanation
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
Explanation: Load raw data
End of explanation
CSV_COLUMNS = [
"fare_amount",
"dayofweek",
"hourofday",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"traffic_last_5min",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# prefetch one batch to overlap input processing with training (tf.data.AUTOTUNE could be used instead)
dataset = dataset.prefetch(1)
return dataset
INPUT_COLS = [
"dayofweek",
"hourofday",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"traffic_last_5min",
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS
}
Explanation: Use tf.data to read the CSV files
These functions for reading data from the csv files are similar to what we used in the Introduction to Tensorflow module. Note that here we have an additional feature traffic_last_5min.
End of explanation
# Build a keras DNN model using Sequential API
def build_model(dnn_hidden_units):
model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))
for num_nodes in dnn_hidden_units:
model.add(Dense(units=num_nodes, activation="relu"))
model.add(Dense(units=1, activation="linear"))
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
Explanation: Build a simple keras DNN model
End of explanation
HIDDEN_UNITS = [32, 8]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around
NUM_EVALS = 60 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-traffic-train*",
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN,
)
evalds = create_dataset(
pattern="../data/taxi-traffic-valid*",
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.EVAL,
).take(NUM_EVAL_EXAMPLES // 1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)],
)
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
model.predict(
x={
"dayofweek": tf.convert_to_tensor([6]),
"hourofday": tf.convert_to_tensor([17]),
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"traffic_last_5min": tf.convert_to_tensor([114]),
},
steps=1,
)
Explanation: Next, we can call the build_model to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.
End of explanation
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.now().strftime("%Y%m%d%H%M%S"))
model.save(EXPORT_PATH) # with default serving function
os.environ["EXPORT_PATH"] = EXPORT_PATH
Explanation: Export and deploy model
End of explanation
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=taxifare_$TIMESTAMP
ENDPOINT_DISPLAYNAME=taxifare_endpoint_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-5:latest"
ARTIFACT_DIRECTORY=gs://${BUCKET}/${MODEL_DISPLAYNAME}/
echo $ARTIFACT_DIRECTORY
gsutil cp -r ${EXPORT_PATH}/* ${ARTIFACT_DIRECTORY}
# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
--region=$REGION \
--display-name=$MODEL_DISPLAYNAME \
--container-image-uri=$IMAGE_URI \
--artifact-uri=$ARTIFACT_DIRECTORY \
--format="value(model)")
MODEL_ID=$(echo $MODEL_RESOURCENAME | cut -d"/" -f6)
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"
echo "MODEL_ID=${MODEL_ID}"
# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
--region=$REGION \
--display-name=$ENDPOINT_DISPLAYNAME \
--format="value(name)")
ENDPOINT_ID=$(echo $ENDPOINT_RESOURCENAME | cut -d"/" -f6)
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"
echo "ENDPOINT_ID=${ENDPOINT_ID}"
# Deployment
DEPLOYEDMODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
MIN_REPLICA_COUNT=1
MAX_REPLICA_COUNT=3
gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
--region=$REGION \
--model=$MODEL_RESOURCENAME \
--display-name=$DEPLOYEDMODEL_DISPLAYNAME \
--machine-type=$MACHINE_TYPE \
--min-replica-count=$MIN_REPLICA_COUNT \
--max-replica-count=$MAX_REPLICA_COUNT \
--traffic-split=0=100
Explanation: Note that the last gcloud call below, which deploys the model, can take a few minutes, and you might not see the earlier echo outputs while that job is still running. If you want to make sure that your notebook is not stalled and your model is actually getting deployed, view your models in the console at https://console.cloud.google.com/vertex-ai/models, click on your model, and you should see your endpoint listed with an "in progress" icon next to it.
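If you prefer the terminal, a quick check (a sketch using standard gcloud commands, not part of the original notebook) is to run gcloud ai models list --region=$REGION and gcloud ai endpoints list --region=$REGION, which list the uploaded model and the endpoint while the deployment is still in progress.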
End of explanation |
14,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make a plot of HICP inflation by item groups
Step1: Compute annual inflation rates
Step2: df_infl_items.rename(columns = dic)
tt = df_infl_items.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
melted_df = pd.melt(tt,id_vars=['month','year'])
melted_df.head()
Step3: df_infl_items['month'] = df_infl_items.index.month
df_infl_items['year'] = df_infl_items.index.year
Step4: Generate a bunch of histograms of the data to make sure that all of the data
is in an expected range.
with plt.style.context('https | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
from datetime import datetime
import numpy as np
from matplotlib.ticker import FixedLocator, FixedFormatter
#import seaborn as sns
to_colors = lambda x : x/255.
ls
df_ind_items = pd.read_csv('raw_data_items.csv',header=0,index_col=0,parse_dates=0)
df_ind_items.head()
df_ind_items.index
Explanation: Make a plot of HICP inflation by item groups
End of explanation
df_infl_items = df_ind_items.pct_change(periods=12)*100
mask_rows_infl = df_infl_items.index.year >= 2000
df_infl_items = df_infl_items[mask_rows_infl]
df_infl_items.tail()
tt = df_infl_items.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
tt.head()
tt.to_csv('infl_items.csv')
Explanation: Compute annual inflation rates
End of explanation
items_only = df_infl_items.copy()  # keep the raw item columns so the later statistics do not include the summary columns added below
df_infl_items['min'] = items_only.min(axis=1)
df_infl_items['max'] = items_only.max(axis=1)
df_infl_items['mean'] = items_only.mean(axis=1)
df_infl_items['mode'] = items_only.quantile(q=0.5, axis=1)
df_infl_items['10th'] = items_only.quantile(q=0.10, axis=1)
df_infl_items['90th'] = items_only.quantile(q=0.90, axis=1)
df_infl_items['25th'] = items_only.quantile(q=0.25, axis=1)
df_infl_items['75th'] = items_only.quantile(q=0.75, axis=1)
df_infl_items.tail()
Explanation: df_infl_items.rename(columns = dic)
tt = df_infl_items.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
melted_df = pd.melt(tt,id_vars=['month','year'])
melted_df.head()
End of explanation
df_infl_items.head()
print(df_infl_items.describe())
Explanation: df_infl_items['month'] = df_infl_items.index.month
df_infl_items['year'] = df_infl_items.index.year
End of explanation
len(df_infl_items)
df_infl_items.columns
df_infl_items['month_order'] = range(len(df_infl_items))
month_order = df_infl_items['month_order']
max_infl = df_infl_items['max'].values
min_infl = df_infl_items['min'].values
mean_infl = df_infl_items['mean'].values
mode_infl = df_infl_items['mode'].values
p25th = df_infl_items['25th'].values
p75th = df_infl_items['75th'].values
p10th = df_infl_items['10th'].values
p90th = df_infl_items['90th'].values
inflEA = df_infl_items['76451'].values
year_begin_df = df_infl_items[df_infl_items.index.month == 1]
year_begin_df;
year_beginning_indeces = list(year_begin_df['month_order'].values)
year_beginning_indeces
year_beginning_names = list(year_begin_df.index.year)
year_beginning_names
month_order;
blue3 = map(to_colors, (24, 116, 205)) # 1874CD
wheat2 = map(to_colors, (238, 216, 174)) # EED8AE
wheat3 = map(to_colors, (205, 186, 150)) # CDBA96
wheat4 = map(to_colors, (139, 126, 102)) # 8B7E66
firebrick3 = map(to_colors, (205, 38, 38)) # CD2626
gray30 = map(to_colors, (77, 77, 77)) # 4D4D4D
fig, ax1 = plt.subplots(figsize=(15,7))
plt.bar(month_order, p90th - p10th, bottom=p10th,
edgecolor='none', color='#C3BBA4', width=1);
# Create the bars showing average highs and lows
plt.bar(month_order, p75th - p25th, bottom=p25th,
edgecolor='none', color='#9A9180', width=1);
#annotations={month_order[50]:'Dividends'}
plt.plot(month_order, inflEA, color='#5A3B49',linewidth=2 );
plt.plot(month_order, mode_infl, color='wheat',linewidth=2,alpha=.3);
plt.xticks(year_beginning_indeces,
year_beginning_names,
fontsize=10)
#ax2 = ax1.twiny()
plt.xlim(-5,200)
plt.grid(False)
##ax2 = ax1.twiny()
plt.ylim(-5, 10)
#ax3 = ax1.twinx()
plt.yticks(range(-4, 10, 2), [r'{}'.format(x)
for x in range(-4, 10, 2)], fontsize=10);
plt.grid(axis='both', color='wheat', linewidth=1.5, alpha = .5)
plt.title('HICP inflation, annual rate of change, Jan 2000 - March 2016\n\n', fontsize=20);
Explanation: Generate a bunch of histograms of the data to make sure that all of the data
is in an expected range.
with plt.style.context('https://gist.githubusercontent.com/rhiever/d0a7332fe0beebfdc3d5/raw/223d70799b48131d5ce2723cd5784f39d7a3a653/tableau10.mplstyle'):
for column in df_infl_items.columns[:-2]:
#if column in ['date']:
# continue
plt.figure()
plt.hist(df_infl_items[column].values)
plt.title(column)
#plt.savefig('{}.png'.format(column))
End of explanation |
14,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoregressive Moving Average (ARMA)
Step1: Sunspots Data
Step2: Does our model obey the theory?
Step3: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do? | Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
This notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.
End of explanation
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]
dta.plot(figsize=(12,4));
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)
print(arma_mod20.params)
arma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
Explanation: Sunspots Data
End of explanation
sm.stats.durbin_watson(arma_mod30.resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r,q,p = sm.tsa.acf(resid, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
Explanation: Does our model obey the theory?
End of explanation
predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True)
fig, ax = plt.subplots(figsize=(12, 8))
dta.loc['1950':].plot(ax=ax)
predict_sunspots.plot(ax=ax, style='r');
def mean_forecast_err(y, yhat):
return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
Explanation: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do?
End of explanation |
14,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining a Milky Way potential model
Step1: Introduction
gala provides a simple and easy way to access and integrate orbits in an
approximate mass model for the Milky Way. The parameters of the mass model are
determined by least-squares fitting the enclosed mass profile of a pre-defined
potential form to recent measurements compiled from the literature. These
measurements are provided with the documentation of gala and are shown below.
The radius units are kpc, and mass units are solar masses
Step2: Let's now plot the above data and uncertainties
Step3: We now need to assume some form for the potential. For simplicity and within reason, we'll use a four component potential model consisting of a Hernquist (1990) bulge and nucleus, a Miyamoto-Nagai (1975) disk, and an NFW (1997) halo. We'll fix the parameters of the disk and bulge to be consistent with previous work (Bovy 2015 - please cite that paper if you use this potential model) and vary the scale mass and scale radius of the nucleus and halo, respectively. We'll fit for these parameters in log-space, so we'll first define a function that returns a gala.potential.CCompositePotential object given these four parameters
Step4: We now need to specify an initial guess for the parameters - let's do that (by making them up), and then plot the initial guess potential over the data
Step5: It looks pretty good already! But let's now use least-squares fitting to optimize our nucleus and halo parameters. We first need to define an error function
Step6: Because the uncertainties are all approximately but not exactly symmetric, we'll take the maximum of the upper and lower uncertainty values and assume that the uncertainties in the mass measurements are Gaussian (a bad but simple assumption)
Step7: Now we have a best-fit potential! Let's plot the enclosed mass of the fit potential over the data
Step8: This potential is already implemented in gala in gala.potential.special, and we can import it with | Python Code:
# Third-party dependencies
from astropy.io import ascii
import astropy.units as u
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import leastsq
# Gala
from gala.mpl_style import mpl_style
plt.style.use(mpl_style)
import gala.dynamics as gd
import gala.integrate as gi
import gala.potential as gp
from gala.units import galactic
%matplotlib inline
Explanation: Defining a Milky Way potential model
End of explanation
tbl = ascii.read('data/MW_mass_enclosed.csv')
tbl
Explanation: Introduction
gala provides a simple and easy way to access and integrate orbits in an
approximate mass model for the Milky Way. The parameters of the mass model are
determined by least-squares fitting the enclosed mass profile of a pre-defined
potential form to recent measurements compiled from the literature. These
measurements are provided with the documentation of gala and are shown below.
The radius units are kpc, and mass units are solar masses:
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(4,4))
ax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']),
marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa',
capthick=0, linestyle='none', elinewidth=1.)
ax.set_xlim(1E-3, 10**2.6)
ax.set_ylim(7E6, 10**12.25)
ax.set_xlabel('$r$ [kpc]')
ax.set_ylabel('$M(<r)$ [M$_\odot$]')
ax.set_xscale('log')
ax.set_yscale('log')
fig.tight_layout()
Explanation: Let's now plot the above data and uncertainties:
End of explanation
def get_potential(log_M_h, log_r_s, log_M_n, log_a):
mw_potential = gp.CCompositePotential()
mw_potential['bulge'] = gp.HernquistPotential(m=5E9, c=1., units=galactic)
mw_potential['disk'] = gp.MiyamotoNagaiPotential(m=6.8E10*u.Msun, a=3*u.kpc, b=280*u.pc,
units=galactic)
mw_potential['nucl'] = gp.HernquistPotential(m=np.exp(log_M_n), c=np.exp(log_a)*u.pc,
units=galactic)
mw_potential['halo'] = gp.NFWPotential(m=np.exp(log_M_h), r_s=np.exp(log_r_s), units=galactic)
return mw_potential
Explanation: We now need to assume some form for the potential. For simplicity and within reason, we'll use a four component potential model consisting of a Hernquist (1990) bulge and nucleus, a Miyamoto-Nagai (1975) disk, and an NFW (1997) halo. We'll fix the parameters of the disk and bulge to be consistent with previous work (Bovy 2015 - please cite that paper if you use this potential model) and vary the scale mass and scale radius of the nucleus and halo, respectively. We'll fit for these parameters in log-space, so we'll first define a function that returns a gala.potential.CCompositePotential object given these four parameters:
End of explanation
# Initial guess for the parameters- units are:
# [Msun, kpc, Msun, pc]
x0 = [np.log(6E11), np.log(20.), np.log(2E9), np.log(100.)]
init_potential = get_potential(*x0)
xyz = np.zeros((3, 256))
xyz[0] = np.logspace(-3, 3, 256)
fig, ax = plt.subplots(1, 1, figsize=(4,4))
ax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']),
marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa',
capthick=0, linestyle='none', elinewidth=1.)
fit_menc = init_potential.mass_enclosed(xyz*u.kpc)
ax.loglog(xyz[0], fit_menc.value, marker='', color="#3182bd",
linewidth=2, alpha=0.7)
ax.set_xlim(1E-3, 10**2.6)
ax.set_ylim(7E6, 10**12.25)
ax.set_xlabel('$r$ [kpc]')
ax.set_ylabel('$M(<r)$ [M$_\odot$]')
ax.set_xscale('log')
ax.set_yscale('log')
fig.tight_layout()
Explanation: We now need to specify an initial guess for the parameters - let's do that (by making them up), and then plot the initial guess potential over the data:
End of explanation
def err_func(p, r, Menc, Menc_err):
pot = get_potential(*p)
xyz = np.zeros((3,len(r)))
xyz[0] = r
model_menc = pot.mass_enclosed(xyz).to(u.Msun).value
return (model_menc - Menc) / Menc_err
Explanation: It looks pretty good already! But let's now use least-squares fitting to optimize our nucleus and halo parameters. We first need to define an error function:
End of explanation
err = np.max([tbl['Menc_err_pos'], tbl['Menc_err_neg']], axis=0)
p_opt, ier = leastsq(err_func, x0=x0, args=(tbl['r'], tbl['Menc'], err))
assert ier in range(1,4+1), "least-squares fit failed!"
fit_potential = get_potential(*p_opt)
Explanation: Because the uncertainties are all approximately but not exactly symmetric, we'll take the maximum of the upper and lower uncertainty values and assume that the uncertainties in the mass measurements are Gaussian (a bad but simple assumption):
End of explanation
xyz = np.zeros((3, 256))
xyz[0] = np.logspace(-3, 3, 256)
fig, ax = plt.subplots(1, 1, figsize=(4,4))
ax.errorbar(tbl['r'], tbl['Menc'], yerr=(tbl['Menc_err_neg'], tbl['Menc_err_pos']),
marker='o', markersize=2, color='k', alpha=1., ecolor='#aaaaaa',
capthick=0, linestyle='none', elinewidth=1.)
fit_menc = fit_potential.mass_enclosed(xyz*u.kpc)
ax.loglog(xyz[0], fit_menc.value, marker='', color="#3182bd",
linewidth=2, alpha=0.7)
ax.set_xlim(1E-3, 10**2.6)
ax.set_ylim(7E6, 10**12.25)
ax.set_xlabel('$r$ [kpc]')
ax.set_ylabel('$M(<r)$ [M$_\odot$]')
ax.set_xscale('log')
ax.set_yscale('log')
fig.tight_layout()
Explanation: Now we have a best-fit potential! Let's plot the enclosed mass of the fit potential over the data:
End of explanation
from gala.potential import MilkyWayPotential
potential = MilkyWayPotential()
potential
Explanation: This potential is already implemented in gala in gala.potential.special, and we can import it with:
End of explanation |
14,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cloud Dataflow Tutorial
Prerequisites
Set up billing for your Google Cloud Platform project
Enable the Dataflow API
Create a GCS bucket
Create a BigQuery dataset named testdataset
Start Datalab
That's it!
To copy this notebook
Once Datalab is open, create a new notebook.
Then enter the following code in a cell and run it.
!git clone https
Step1: Basic Dataflow settings
Specify the job name, project name, and the location for temporary files.
Step2: Dataflow scaling settings
Configure the maximum number of workers, the machine type, and so on.
The default worker disk size is large (250 GB for batch, 420 GB for streaming), so it is recommended to set the size you actually need here.
Step3: Switching the execution environment
DirectRunner
Step4: Preparation is done; below are example pipelines
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
Pipeline 1
Simply read a file from GCS and write its contents back to GCS
+----------------+
| |
| Read GCS File |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Write GCS File |
| |
+----------------+
Step5: Pipeline 2
Simply read data from BigQuery and write its contents to GCS
The BigQuery dataset is the following
https
Step6: Pipeline 3
Read data from BigQuery and write data to BigQuery
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Write BigQuery |
| |
+----------------+
Step7: Pipeline 4
Read data from BigQuery
transform the data
and write it to BigQuery
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Modify Element |
| |
+----------------+
|
v
+-------+--------+
| |
| Write BigQuery |
| |
+----------------+
Step8: Pipeline 5
An example of splitting into branches
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
+---------------------+
| |
+-------v--------+ +-------v--------+
| | | |
| Modify Element | | Modify Element |
| | | |
+-------+--------+ +-------+--------+
| |
+---------------------+
|
+-------v--------+
| |
| Flatten |
| |
+-------+--------+
|
|
+-------v--------+
| |
| Save BigQuery |
| |
+----------------+
Step9: Pipeline 6
Use GroupBy
Step10: Pipeline 7
Split the GroupBy intervals with windows | Python Code:
import apache_beam as beam
Explanation: Cloud Dataflow Tutorial
Prerequisites
Set up billing for your Google Cloud Platform project
Enable the Dataflow API
Create a GCS bucket
Create a BigQuery dataset named testdataset
Start Datalab
That's it!
To copy this notebook
Once Datalab is open, create a new notebook.
Then enter the following code in a cell and run it.
!git clone https://github.com/hayatoy/dataflow-tutorial.git
Don't forget the leading " ! ".
Before you run it
Change the project name. You can replace all occurrences with Esc->F.
<font color="red">Note: do not run runAll. Running everything takes a long time.</font>
This notebook is for Datalab (Dataflow 0.6.0).
If you use Dataflow 2.0.0 or later, change the beam.utils parts to beam.options.
Importing Apache Beam
End of explanation
options = beam.utils.pipeline_options.PipelineOptions()
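# Note (not from the original notebook): on Apache Beam / Dataflow 2.0.0 and later, the same
# options live under beam.options instead of beam.utils, e.g.:
# options = beam.options.pipeline_options.PipelineOptions()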
gcloud_options = options.view_as(
beam.utils.pipeline_options.GoogleCloudOptions)
gcloud_options.job_name = 'dataflow-tutorial1'
gcloud_options.project = 'PROJECTID'
gcloud_options.staging_location = 'gs://PROJECTID/staging'
gcloud_options.temp_location = 'gs://PROJECTID/temp'
Explanation: Basic Dataflow settings
Specify the job name, project name, and the location for temporary files.
End of explanation
worker_options = options.view_as(beam.utils.pipeline_options.WorkerOptions)
worker_options.disk_size_gb = 20
worker_options.max_num_workers = 2
# worker_options.num_workers = 2
# worker_options.machine_type = 'n1-standard-8'
# worker_options.zone = 'asia-northeast1-a'
Explanation: Dataflow scaling settings
Configure the maximum number of workers, the machine type, and so on.
The default worker disk size is large (250 GB for batch, 420 GB for streaming), so it is recommended to set the size you actually need here.
End of explanation
options.view_as(beam.utils.pipeline_options.StandardOptions).runner = 'DirectRunner'
# options.view_as(beam.utils.pipeline_options.StandardOptions).runner = 'DataflowRunner'
Explanation: Switching the execution environment
DirectRunner: runs on your local machine
DataflowRunner: runs on Dataflow
End of explanation
p1 = beam.Pipeline(options=options)
(p1 | 'read' >> beam.io.ReadFromText('gs://dataflow-samples/shakespeare/kinglear.txt')
| 'write' >> beam.io.WriteToText('gs://PROJECTID/test.txt', num_shards=1)
)
p1.run().wait_until_finish()
Explanation: Preparation is done; below are example pipelines
<br/>
<br/>
<br/>
<br/>
<br/>
<br/>
Pipeline 1
Simply read a file from GCS and write its contents back to GCS
+----------------+
| |
| Read GCS File |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Write GCS File |
| |
+----------------+
End of explanation
p2 = beam.Pipeline(options=options)
query = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'
(p2 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))
| 'write' >> beam.io.WriteToText('gs://PROJECTID/test2.txt', num_shards=1)
)
p2.run().wait_until_finish()
Explanation: Pipeline 2
Simply read data from BigQuery and write its contents to GCS
The BigQuery dataset is the following
https://bigquery.cloud.google.com/table/bigquery-public-data:samples.shakespeare
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Write GCS File |
| |
+----------------+
End of explanation
p3 = beam.Pipeline(options=options)
# Note: create the dataset in advance
query = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'
(p3 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))
| 'write' >> beam.io.Write(beam.io.BigQuerySink(
'testdataset.testtable1',
schema='corpus_date:INTEGER, corpus:STRING, word:STRING, word_count:INTEGER',
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
)
p3.run().wait_until_finish()
Explanation: Pipeline 3
Read data from BigQuery and write data to BigQuery
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Write BigQuery |
| |
+----------------+
End of explanation
def modify_data1(element):
# beam.Map is used when one input element produces exactly one output element
# element = {u'corpus_date': 0, u'corpus': u'sonnets', u'word': u'LVII', u'word_count': 1}
corpus_upper = element['corpus'].upper()
word_len = len(element['word'])
return {'corpus_upper': corpus_upper,
'word_len': word_len
}
p4 = beam.Pipeline(options=options)
query = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'
(p4 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))
| 'modify' >> beam.Map(modify_data1)
| 'write' >> beam.io.Write(beam.io.BigQuerySink(
'testdataset.testtable2',
schema='corpus_upper:STRING, word_len:INTEGER',
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
)
p4.run().wait_until_finish()
Explanation: Pipeline 4
Read data from BigQuery
transform the data
and write it to BigQuery
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
v
+-------+--------+
| |
| Modify Element |
| |
+----------------+
|
v
+-------+--------+
| |
| Write BigQuery |
| |
+----------------+
End of explanation
def modify1(element):
# element = {u'corpus_date': 0, u'corpus': u'sonnets', u'word': u'LVII', u'word_count': 1}
word_count = len(element['corpus'])
count_type = 'corpus only'
return {'word_count': word_count,
'count_type': count_type
}
def modify2(element):
# element = {u'corpus_date': 0, u'corpus': u'sonnets', u'word': u'LVII', u'word_count': 1}
word_count = len(element['word'])
count_type = 'word only'
return {'word_count': word_count,
'count_type': count_type
}
p5 = beam.Pipeline(options=options)
query = 'SELECT * FROM [bigquery-public-data:samples.shakespeare] LIMIT 10'
query_results = p5 | 'read' >> beam.io.Read(beam.io.BigQuerySource(
project='PROJECTID', use_standard_sql=False, query=query))
# Pass the BigQuery results to two branches
branch1 = query_results | 'modify1' >> beam.Map(modify1)
branch2 = query_results | 'modify2' >> beam.Map(modify2)
# Merge the results from the branches with Flatten
((branch1, branch2) | beam.Flatten()
| 'write' >> beam.io.Write(beam.io.BigQuerySink(
'testdataset.testtable3',
schema='word_count:INTEGER, count_type:STRING',
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
)
p5.run().wait_until_finish()
Explanation: Pipeline 5
An example of splitting into branches
+----------------+
| |
| Read BigQuery |
| |
+-------+--------+
|
+---------------------+
| |
+-------v--------+ +-------v--------+
| | | |
| Modify Element | | Modify Element |
| | | |
+-------+--------+ +-------+--------+
| |
+---------------------+
|
+-------v--------+
| |
| Flatten |
| |
+-------+--------+
|
|
+-------v--------+
| |
| Save BigQuery |
| |
+----------------+
End of explanation
def modify_data2(kvpair):
# GroupByKey passes a tuple of the key and the list of values that share that key
# kvpair = (u'word only', [4, 4, 6, 6, 7, 7, 7, 7, 8, 9])
return {'count_type': kvpair[0],
'sum': sum(kvpair[1])
}
p6 = beam.Pipeline(options=options)
query = 'SELECT * FROM [PROJECTID:testdataset.testtable3] LIMIT 20'
(p6 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))
| 'pair' >> beam.Map(lambda x: (x['count_type'], x['word_count']))
| "groupby" >> beam.GroupByKey()
| 'modify' >> beam.Map(modify_data2)
| 'write' >> beam.io.Write(beam.io.BigQuerySink(
'testdataset.testtable4',
schema='count_type:STRING, sum:INTEGER',
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
)
p6.run().wait_until_finish()
Explanation: Pipeline 6
Use GroupBy
End of explanation
def assign_timevalue(v):
# Attach a timestamp to each element of the PCollection
# The window downstream splits the data based on this timestamp
# Here a random offset is used as the timestamp just for illustration
import apache_beam.transforms.window as window
import random
import time
return window.TimestampedValue(v, int(time.time()) + random.randint(0, 1))
def modify_data3(kvpair):
# GroupByKey passes a tuple of the key and the list of values that share that key
# Because the data is split into windows, each list holds fewer elements
# kvpair = (u'word only', [4, 4, 6, 6, 7])
return {'count_type': kvpair[0],
'sum': sum(kvpair[1])
}
p7 = beam.Pipeline(options=options)
query = 'SELECT * FROM [PROJECTID:testdataset.testtable3] LIMIT 20'
(p7 | 'read' >> beam.io.Read(beam.io.BigQuerySource(project='PROJECTID', use_standard_sql=False, query=query))
| "assign tv" >> beam.Map(assign_timevalue)
| 'window' >> beam.WindowInto(beam.window.FixedWindows(1))
| 'pair' >> beam.Map(lambda x: (x['count_type'], x['word_count']))
| "groupby" >> beam.GroupByKey()
| 'modify' >> beam.Map(modify_data3)
| 'write' >> beam.io.Write(beam.io.BigQuerySink(
'testdataset.testtable5',
schema='count_type:STRING, sum:INTEGER',
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))
)
p7.run().wait_until_finish()
Explanation: Pipeline 7
Split the GroupBy intervals with windows
End of explanation |
14,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook the datasets for the predictor will be generated.
Step1: Let's first define the list of parameters to use in each dataset.
Step2: Now, let's define the function to generate each dataset.
Step3: Finally, let's parallelize the generation of all the datasets, and generate them. (took some code and suggestions from here | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
Explanation: In this notebook the datasets for the predictor will be generated.
End of explanation
# Input values
GOOD_DATA_RATIO = 0.99 # The ratio of non-missing values for a symbol to be considered good
SAMPLES_GOOD_DATA_RATIO = 0.9 # The ratio of non-missing values for an interval to be considered good
train_val_time = -1 # In real time days (-1 is for the full interval)
''' Step days will be fixed. That means that the datasets with longer base periods will have samples
that are more correlated. '''
step_days = 7 # market days
base_days = [7, 14, 28, 56, 112] # In market days
ahead_days = [7, 14, 28, 56] # market days
datasets_params_list_df = pd.DataFrame([(x,y) for x in base_days for y in ahead_days],
columns=['base_days', 'ahead_days'])
datasets_params_list_df
Explanation: Let's first define the list of parameters to use in each dataset.
End of explanation
def generate_one_set(params):
# print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values))
return params
Explanation: Now, let's define the function to generate each dataset.
End of explanation
from multiprocessing import Pool
num_partitions = datasets_params_list_df.shape[0] #number of partitions to split dataframe
num_cores = 4 #number of cores on your machine
def parallelize_dataframe(df, func):
df_split = np.array_split(df, num_partitions)
pool = Pool(num_cores)
df = pd.concat(pool.map(func, df_split))
pool.close()
pool.join()
return df
parallelize_dataframe(datasets_params_list_df, generate_one_set)
Explanation: Finally, let's parallelize the generation of all the datasets, and generate them. (took some code and suggestions from here: http://www.racketracer.com/2016/07/06/pandas-in-parallel/#comments)
End of explanation |
14,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: stringliteral
Step3: String operations
Different ways of String concatenation
Step4: String indexing and slicing
Step5: String indexing
Step6: String slicing
Step7: String slicing with offsets
Step8: String Immutability
Step9: Useful String methods
Case conversions
Step10: String replace
Step11: Numeric checks
Step12: Alphabet checks
Step13: Alphanumeric checks
Step14: String splitting and joining
Step15: String formatting
Simple string formatting expressions - old style
Step16: Formatting expressions with different data types - old style
Step17: Formatting strings using the format method - new style
Step18: Alternative ways of using string format
Step19: Regular Expressions
Step20: Putting it all together - Basic Text Processing and Analysis | Python Code:
new_string = "This is a String" # storing a string
print('ID:', id(new_string)) # shows the object identifier (address)
print('Type:', type(new_string)) # shows the object type
print('Value:', new_string) # shows the object value
# simple string
simple_string = 'Hello!' + " I'm a simple string"
print(simple_string)
# multi-line string, note the \n (newline) escape character automatically created
multi_line_string = Hello I'm
a multi-line
string!
multi_line_string
print(multi_line_string)
# Normal string with escape sequences leading to a wrong file path!
escaped_string = "C:\the_folder\new_dir\file.txt"
print(escaped_string) # will cause errors if we try to open a file here
# raw string keeping the backslashes in its normal form
raw_string = r'C:\the_folder\new_dir\file.txt'
print(raw_string)
# unicode string literals
string_with_unicode = 'H\u00e8llo!'
print(string_with_unicode)
more_unicode = 'I love Pizza 🍕! Shall we book a cab 🚕 to get pizza?'
print(more_unicode)
print(string_with_unicode + '\n' + more_unicode)
' '.join([string_with_unicode, more_unicode])
more_unicode[::-1] # reverses the string
Explanation: stringliteral ::= [stringprefix](shortstring | longstring)
stringprefix ::= "r" | "u" | "ur" | "R" | "U" | "UR" | "Ur" | "uR"
| "b" | "B" | "br" | "Br" | "bR" | "BR"
shortstring ::= "'" shortstringitem* "'" | '"' shortstringitem* '"'
longstring ::= "'''" longstringitem* "'''" | '' longstringitem* ''
shortstringitem ::= shortstringchar | escapeseq
longstringitem ::= longstringchar | escapeseq
shortstringchar ::= <any source character except "\" or newline or the quote>
longstringchar ::= <any source character except "\">
escapeseq ::= "\" <any ASCII character>
String types
End of explanation
'Hello 😊' + ' and welcome ' + 'to Python 🐍!'
'Hello 😊' ' and welcome ' 'to Python 🐍!'
# concatenation of variables and literals
s1 = 'Python 💻!'
'Hello 😊 ' + s1
'Hello 😊 ' s1
# some more ways of concatenating strings
s2 = '--🐍Python🐍--'
s2 * 5
s1 + s2
(s1 + s2)*3
# concatenating several strings together in parentheses
s3 = ('This '
'is another way '
'to concatenate '
'several strings!')
s3
# checking for substrings in a string
'way' in s3
'python' in s3
# computing total length of the string
len(s3)
Explanation: String operations
Different ways of String concatenation
End of explanation
# creating a string
s = 'PYTHON'
s, type(s)
Explanation: String indexing and slicing
End of explanation
# depicting string indexes
for index, character in enumerate(s):
print('Character ->', character, 'has index->', index)
s[0], s[1], s[2], s[3], s[4], s[5]
s[-1], s[-2], s[-3], s[-4], s[-5], s[-6]
Explanation: String indexing
End of explanation
s[:]
s[1:4]
s[:3], s[3:]
s[-3:]
s[:3] + s[3:]
s[:3] + s[-3:]
Explanation: String slicing
End of explanation
s[::1] # no offset
s[::2] # print every 2nd character in string
Explanation: String slicing with offsets
End of explanation
# strings are immutable hence assignment throws error
s[0] = 'X'
print('Original String id:', id(s))
# creates a new string
s = 'X' + s[1:]
print(s)
print('New String id:', id(s))
Explanation: String Immutability
End of explanation
s = 'python is great'
s.capitalize()
s.upper()
s.title()
Explanation: Useful String methods
Case conversions
End of explanation
s.replace('python', 'NLP')
Explanation: String replace
End of explanation
'12345'.isdecimal()
'apollo11'.isdecimal()
Explanation: Numeric checks
End of explanation
'python'.isalpha()
'number1'.isalpha()
Explanation: Alphabet checks
End of explanation
'total'.isalnum()
'abc123'.isalnum()
'1+1'.isalnum()
Explanation: Alphanumeric checks
End of explanation
s = 'I,am,a,comma,separated,string'
s.split(',')
' '.join(s.split(','))
# stripping whitespace characters
s = ' I am surrounded by spaces '
s
s.strip()
sentences = 'Python is great. NLP is also good.'
sentences.split('.')
print('\n'.join(sentences.split('.')))
print('\n'.join([sentence.strip()
for sentence in sentences.split('.')
if sentence]))
Explanation: String splitting and joining
End of explanation
'Hello %s' %('Python!')
'Hello %s %s' %('World!', 'How are you?')
Explanation: String formatting
Simple string formatting expressions - old style
End of explanation
'We have %d %s containing %.2f gallons of %s' %(2, 'bottles', 2.5, 'milk')
'We have %d %s containing %.2f gallons of %s' %(5.21, 'jugs', 10.86763, 'juice')
Explanation: Formatting expressions with different data types - old style
End of explanation
'Hello {} {}, it is a great {} to meet you at {}'.format('Mr.', 'Jones', 'pleasure', 5)
'Hello {} {}, it is a great {} to meet you at {} o\' clock'.format('Sir', 'Arthur', 'honor', 9)
Explanation: Formatting strings using the format method - new style
End of explanation
'I have a {food_item} and a {drink_item} with me'.format(drink_item='soda', food_item='sandwich')
'The {animal} has the following attributes: {attributes}'.format(animal='dog', attributes=['lazy', 'loyal'])
Explanation: Alternative ways of using string format
End of explanation
s1 = 'Python is an excellent language'
s2 = 'I love the Python language. I also use Python to build applications at work!'
import re
pattern = 'python'
# match only returns a match if regex match is found at the beginning of the string
re.match(pattern, s1)
# pattern is in lower case hence ignore case flag helps
# in matching same pattern with different cases
re.match(pattern, s1, flags=re.IGNORECASE)
# printing matched string and its indices in the original string
m = re.match(pattern, s1, flags=re.IGNORECASE)
print('Found match {} ranging from index {} - {} in the string "{}"'.format(m.group(0),
m.start(),
m.end(), s1))
# match does not work when pattern is not there in the beginning of string s2
re.match(pattern, s2, re.IGNORECASE)
# illustrating find and search methods using the re module
re.search(pattern, s2, re.IGNORECASE)
re.findall(pattern, s2, re.IGNORECASE)
match_objs = re.finditer(pattern, s2, re.IGNORECASE)
match_objs
print("String:", s2)
for m in match_objs:
print('Found match "{}" ranging from index {} - {}'.format(m.group(0),
m.start(), m.end()))
# illustrating pattern substitution using sub and subn methods
re.sub(pattern, 'Java', s2, flags=re.IGNORECASE)
re.subn(pattern, 'Java', s2, flags=re.IGNORECASE)
# dealing with unicode matching using regexes
s = u'H\u00e8llo! this is Python 🐍'
s
re.findall(r'\w+', s)
re.findall(r"[A-Z]\w+", s, re.UNICODE)
emoji_pattern = r"['\U0001F300-\U0001F5FF'|'\U0001F600-\U0001F64F'|'\U0001F680-\U0001F6FF'|'\u2600-\u26FF\u2700-\u27BF']"
re.findall(emoji_pattern, s, re.UNICODE)
Explanation: Regular Expressions
End of explanation
from nltk.corpus import gutenberg
import matplotlib.pyplot as plt
%matplotlib inline
bible = gutenberg.open('bible-kjv.txt')
bible = bible.readlines()
bible[:5]
len(bible)
bible = list(filter(None, [item.strip('\n') for item in bible]))
bible[:5]
len(bible)
line_lengths = [len(sentence) for sentence in bible]
h = plt.hist(line_lengths)
tokens = [item.split() for item in bible]
print(tokens[:5])
total_tokens_per_line = [len(sentence.split()) for sentence in bible]
h = plt.hist(total_tokens_per_line, color='orange')
words = [word for sentence in tokens for word in sentence]
print(words[:20])
words = list(filter(None, [re.sub(r'[^A-Za-z]', '', word) for word in words]))
print(words[:20])
from collections import Counter
words = [word.lower() for word in words]
c = Counter(words)
c.most_common(10)
import nltk
stopwords = nltk.corpus.stopwords.words('english')
words = [word.lower() for word in words if word.lower() not in stopwords]
c = Counter(words)
c.most_common(10)
Explanation: Putting it all together - Basic Text Processing and Analysis
End of explanation |
14,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Box Plots
The following illustrates some options for the boxplot in statsmodels. These include violin_plot and bean_plot.
Step1: Bean Plots
The following example is taken from the docstring of beanplot.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
Step3: Group age by party ID, and create a violin plot with it
Step4: Advanced Box Plots
Based of example script example_enhanced_boxplots.py (by Ralf Gommers) | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
Explanation: Box Plots
The following illustrates some options for the boxplot in statsmodels. These include violin_plot and bean_plot.
End of explanation
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
Explanation: Bean Plots
The following example is taken from the docstring of beanplot.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
End of explanation
plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible
plt.rcParams['figure.figsize'] = (10.0, 8.0) # make plot larger in notebook
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
#plt.show()
def beanplot(data, plot_opts={}, jitter=False):
helper function to try out different plot options
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts_ = {'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
plot_opts_.update(plot_opts)
sm.graphics.beanplot(data, ax=ax, labels=labels,
jitter=jitter, plot_opts=plot_opts_)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
fig = beanplot(age, jitter=True)
fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75, 'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
Explanation: Group age by party ID, and create a violin plot with it:
End of explanation
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
# Necessary to make horizontal axis labels fit
plt.rcParams['figure.subplot.bottom'] = 0.23
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
# Group age by party ID.
age = [data.exog['age'][data.endog == id] for id in party_ID]
# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)
sm.graphics.violinplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small',
'label_rotation':30, 'violin_fc':(0.8, 0.8, 0.8),
'jitter_marker':'.', 'jitter_marker_size':3, 'bean_color':'#FF6F00',
'bean_mean_color':'#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16 # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]
ix = data.exog['income'] >= 20 # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = 'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = 'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Show all plots.
#plt.show()
Explanation: Advanced Box Plots
Based on the example script example_enhanced_boxplots.py (by Ralf Gommers)
End of explanation |
14,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST with transfer learning.
First let us build a MNIST logistic regression classifier.
We will then get better feature embeddings for the images by using the dvd library. This involves transfer learning.
We will compare the simple classifier with the transfer-learnt model on accuracy.
Step1: Simple logistic Regression
Step2: Let's get VGG embeddings for the train and test input images and convert them to the transfer-learnt space.
Step3: Model with transfer learnt features | Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)
img = mnist.train.images[123]
img = np.reshape(img,(28,28))
plt.imshow(img, cmap = 'gray')
plt.show()
img = np.reshape(img,(28,28,1))
print img.shape, 'label = ', mnist.train.labels[123]
from dvd import dvd
img_embedding = dvd.get_embedding_x(img)
print img_embedding.shape
Explanation: MNIST with transfer learning.
First let us build a MNIST logistic regression classifier.
We will then get better feature embeddings for the images by using the dvd library. This involves transfer learning.
We will compare the simple classifier with the transfer-learnt model on accuracy.
End of explanation
from sklearn import linear_model
from sklearn.metrics import accuracy_score
clf = linear_model.LogisticRegression()
clf.fit(mnist.train.images, mnist.train.labels)
preds = clf.predict(mnist.test.images)
print accuracy_score(preds, mnist.test.labels)
Explanation: Simple logistic Regression
End of explanation
train = np.reshape(mnist.train.images, (mnist.train.images.shape[0],28,28))
print 'initial training shape = ', train.shape
train = dvd.get_embedding_X(train)
print 'training shape after embedding =', train.shape
test = np.reshape(mnist.test.images, (mnist.test.images.shape[0],28,28))
test = dvd.get_embedding_X(test)
Explanation: Let's get VGG embeddings for the train and test input images and convert them to the transfer-learnt space.
End of explanation
from sklearn import linear_model
from sklearn.metrics import accuracy_score
clf = linear_model.LogisticRegression()
clf.fit(train, mnist.train.labels)
preds = clf.predict(test)
print accuracy_score(preds, mnist.test.labels)
Explanation: Model with transfer learnt features
End of explanation |
14,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CIFAR-10 Recipe
In this notebook, we will show how to train a state-of-the-art CIFAR-10 network with MXNet and extract features from the network.
This example will cover:
Network/Data definition
Multi GPU training
Model saving and loading
Prediction/Extracting Feature
Step1: First, let's make some helper functions to build a simplified Inception network. More details about how to compose symbols into components can be found at composite_symbol
Step2: Now we can build a network with these component factories
Step3: If we have multiple GPUs, for example 4 GPUs, we can utilize them without any difficulty
Step4: The next step is declaring the data iterator. The original CIFAR-10 data is 3x32x32 in binary format; we provide a RecordIO version, so we can use the Image RecordIO iterator. For more information about the Image RecordIO iterator, check the documentation.
Step5: Now we can fit the model with data.
Step6: After only 1 epoch, our model is able to achieve about 65% accuracy on the test set (if not, try training for a few more epochs).
We can save our model by calling either save or using pickle.
Step7: To load a saved model, use pickle if the model was saved with pickle, or use load if it was saved with save.
Step8: We can use the model to do prediction
Step9: From any symbol, we are able to know its internal feature_maps and bind a new model to extract that feature map | Python Code:
import mxnet as mx
import logging
import numpy as np
# setup logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
Explanation: CIFAR-10 Recipe
In this notebook, we will show how to train a state-of-the-art CIFAR-10 network with MXNet and extract features from the network.
This example will cover:
Network/Data definition
Multi GPU training
Model saving and loading
Prediction/Extracting Feature
End of explanation
# Basic Conv + BN + ReLU factory
def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), act_type="relu"):
    # there is an optional parameter ``workspace`` that may influence convolution performance
    # by default, the workspace is set to 256 (MB)
    # you may set a larger value, but the convolution layer will only use as much workspace as it actually needs
    # MXNet will handle workspace reuse without parallelism conflicts
conv = mx.symbol.Convolution(data=data, workspace=256,
num_filter=num_filter, kernel=kernel, stride=stride, pad=pad)
bn = mx.symbol.BatchNorm(data=conv)
act = mx.symbol.Activation(data = bn, act_type=act_type)
return act
# A Simple Downsampling Factory
def DownsampleFactory(data, ch_3x3):
# conv 3x3
conv = ConvFactory(data=data, kernel=(3, 3), stride=(2, 2), num_filter=ch_3x3, pad=(1, 1))
# pool
pool = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pad=(1,1), pool_type='max')
# concat
concat = mx.symbol.Concat(*[conv, pool])
return concat
# A Simple module
def SimpleFactory(data, ch_1x1, ch_3x3):
# 1x1
conv1x1 = ConvFactory(data=data, kernel=(1, 1), pad=(0, 0), num_filter=ch_1x1)
# 3x3
conv3x3 = ConvFactory(data=data, kernel=(3, 3), pad=(1, 1), num_filter=ch_3x3)
#concat
concat = mx.symbol.Concat(*[conv1x1, conv3x3])
return concat
Explanation: First, let's make some helper functions to build a simplified Inception network. More details about how to compose symbols into components can be found at composite_symbol
End of explanation
data = mx.symbol.Variable(name="data")
conv1 = ConvFactory(data=data, kernel=(3,3), pad=(1,1), num_filter=96, act_type="relu")
in3a = SimpleFactory(conv1, 32, 32)
in3b = SimpleFactory(in3a, 32, 48)
in3c = DownsampleFactory(in3b, 80)
in4a = SimpleFactory(in3c, 112, 48)
in4b = SimpleFactory(in4a, 96, 64)
in4c = SimpleFactory(in4b, 80, 80)
in4d = SimpleFactory(in4c, 48, 96)
in4e = DownsampleFactory(in4d, 96)
in5a = SimpleFactory(in4e, 176, 160)
in5b = SimpleFactory(in5a, 176, 160)
pool = mx.symbol.Pooling(data=in5b, pool_type="avg", kernel=(7,7), name="global_avg")
flatten = mx.symbol.Flatten(data=pool)
fc = mx.symbol.FullyConnected(data=flatten, num_hidden=10)
softmax = mx.symbol.SoftmaxOutput(name='softmax',data=fc)
# If you'd like to see the network structure, run the plot_network function
#mx.viz.plot_network(symbol=softmax,node_attrs={'shape':'oval','fixedsize':'false'})
# We will make a model with the current symbol
# For demo purposes, this model only trains for 1 epoch
# We will use the first GPU to do training
num_epoch = 1
model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
learning_rate=0.05, momentum=0.9, wd=0.00001)
# we can add learning rate scheduler to the model
# model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
# learning_rate=0.05, momentum=0.9, wd=0.00001,
# lr_scheduler=mx.misc.FactorScheduler(2))
# In this example, the learning rate will be reduced to 0.1 * the previous learning rate every 2 epochs
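# A hedged sketch of actually wiring in such a scheduler (assuming the
# mx.lr_scheduler.FactorScheduler API; its `step` argument counts optimizer
# updates rather than epochs), kept commented out like the alternative above:
# updates_per_epoch = 50000 // 128 + 1
# lr_sched = mx.lr_scheduler.FactorScheduler(step=2 * updates_per_epoch, factor=0.1)
# model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
#                              learning_rate=0.05, momentum=0.9, wd=0.00001,
#                              lr_scheduler=lr_sched)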
Explanation: Now we can build a network with these component factories
End of explanation
# num_devs = 4
# model = mx.model.FeedForward(ctx=[mx.gpu(i) for i in range(num_devs)], symbol=softmax, num_epoch = 1,
# learning_rate=0.05, momentum=0.9, wd=0.00001)
Explanation: If we have multiple GPUs, for example 4 GPUs, we can utilize them without any difficulty
End of explanation
# Use utility function in test to download the data
# or manually prepare the data yourself
import sys
sys.path.append("../../tests/python/common") # change the path to mxnet's tests/
import get_data
get_data.GetCifar10()
# After we get the data, we can declare our data iterator
# The iterator will automatically create mean image file if it doesn't exist
batch_size = 128
total_batch = 50000 / 128 + 1
# The train iterator makes batches of 128 images, and randomly crops each image to 3x28x28 from the original 3x32x32
train_dataiter = mx.io.ImageRecordIter(
shuffle=True,
path_imgrec="data/cifar/train.rec",
mean_img="data/cifar/cifar_mean.bin",
rand_crop=True,
rand_mirror=True,
data_shape=(3,28,28),
batch_size=batch_size,
preprocess_threads=1)
# The test iterator makes batches of 128 images, and center-crops each image to 3x28x28 from the original 3x32x32
# Note: we don't need round_batch for testing because we only pass over the test set once
test_dataiter = mx.io.ImageRecordIter(
path_imgrec="data/cifar/test.rec",
mean_img="data/cifar/cifar_mean.bin",
rand_crop=False,
rand_mirror=False,
data_shape=(3,28,28),
batch_size=batch_size,
round_batch=False,
preprocess_threads=1)
Explanation: The next step is declaring the data iterator. The original CIFAR-10 data is 3x32x32 in binary format; we provide a RecordIO version, so we can use the Image RecordIO iterator. For more information about the Image RecordIO iterator, check the documentation.
End of explanation
model.fit(X=train_dataiter,
eval_data=test_dataiter,
eval_metric="accuracy",
batch_end_callback=mx.callback.Speedometer(batch_size))
# if we want to save the model after every epoch, we can add a checkpoint callback
# model_prefix = './cifar_'
# model.fit(X=train_dataiter,
# eval_data=test_dataiter,
# eval_metric="accuracy",
# batch_end_callback=mx.helper.Speedometer(batch_size),
# epoch_end_callback=mx.callback.do_checkpoint(model_prefix))
Explanation: Now we can fit the model with data.
End of explanation
# using pickle
import pickle
smodel = pickle.dumps(model)
# using saving (recommended)
# We get the benefit of being able to directly load/save from cloud storage (S3, HDFS)
prefix = "cifar10"
model.save(prefix)
Explanation: After only 1 epoch, our model is able to achieve about 65% accuracy on the test set (if not, try training for a few more epochs).
We can save our model by calling either save or using pickle.
End of explanation
# use pickle
model2 = pickle.loads(smodel)
# using load method (able to load from S3/HDFS directly)
model3 = mx.model.FeedForward.load(prefix, num_epoch, ctx=mx.gpu())
Explanation: To load a saved model, use pickle if the model was saved with pickle, or use load if it was saved with save.
End of explanation
prob = model3.predict(test_dataiter)
logging.info('Finish predict...')
# Check the accuracy from prediction
test_dataiter.reset()
# get label
# Because the iterator pads each batch to the same shape, we want to remove the padded samples here
y_batch = []
for dbatch in test_dataiter:
label = dbatch.label[0].asnumpy()
pad = test_dataiter.getpad()
real_size = label.shape[0] - pad
y_batch.append(label[0:real_size])
y = np.concatenate(y_batch)
# get the predicted label from the predicted probabilities
py = np.argmax(prob, axis=1)
acc1 = float(np.sum(py == y)) / len(y)
logging.info('final accuracy = %f', acc1)
Explanation: We can use the model to do prediction
End of explanation
# Predict internal featuremaps
# From a symbol, we are able to get all internals. Note it is still a symbol
internals = softmax.get_internals()
# We can get an internal symbol for the feature.
# By default, the symbol is named as "symbol_name + _output"
# in this case we'd like to use the "global_avg" layer's output as the feature, so it's "global_avg_output"
# You may call ```internals.list_outputs()``` to find the target
# but we strongly suggest setting a special name for important symbols
fea_symbol = internals["global_avg_output"]
# Make a new model by using an internal symbol. We can reuse all parameters from model we trained before
# In this case, we must set ```allow_extra_params``` to True
# Because we don't need params of FullyConnected Layer
feature_extractor = mx.model.FeedForward(ctx=mx.gpu(), symbol=fea_symbol,
arg_params=model.arg_params,
aux_params=model.aux_params,
allow_extra_params=True)
# Predict as normal
global_pooling_feature = feature_extractor.predict(test_dataiter)
print(global_pooling_feature.shape)
Explanation: From any symbol, we are able to know its internal feature_maps and bind a new model to extract that feature map
End of explanation |
14,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
14,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Portfolio Optimization using Quandl, Bokeh and Gurobi
Borrowed and updated from Michael C. Grant, Continuum Analytics
Step1: First of all, we need some data to proceed. For that purpose we use Quandl. First, you're going to need the quandl package. This isn't totally necessary, as pulling from the API is quite simple with or without the package, but it does make it a bit easier and knocks out a few steps. The Quandl package can be downloaded here. If we set up quandl, next thing to do is to choose some stocks to import. The following is a random selection of stocks.
Step2: The command to import those stocks is quandl.get(). With trim_start and trim_end we can choose a desired time horizon.
Step3: Let's now calculate the growth rates and some stats
Step4: As we move towards our Markowitz portfolio designs it makes sense to view the stocks on a mean/variance scatter plot.
Step5: Gurobi
Time to bring in the big guns. Expressed in mathematical terms, we will be solving models in this form
Step6: Minimum Risk Model
We have set our objective to minimize risk, and fixed our budget at 1. The model we solved above gave us the minimum risk model.
Step7: The efficient frontier
Now what we're going to do is sweep our return target over a range of values, starting at the smallest possible value to the largest. For each, we construct the minimum-risk portfolio. This will give us a tradeoff curve that is known in the business as the efficient frontier or the Pareto-optimal curve.
Note that we're using the same model object we've already constructed! All we have to do is set the return target and re-optimize for each value of interest. | Python Code:
import pandas as pd
import numpy as np
from math import sqrt
import sys
from bokeh.plotting import figure, show, ColumnDataSource, save
from bokeh.models import Range1d, HoverTool
from bokeh.io import output_notebook, output_file
import quandl
from gurobipy import *
# output_notebook() #To enable Bokeh output in notebook, uncomment this line
Explanation: Portfolio Optimization using Quandl, Bokeh and Gurobi
Borrowed and updated from Michael C. Grant, Continuum Analytics
End of explanation
APIToken = "xxx-xxxxxx"
quandlcodes = ["GOOG/NASDAQ_AAPL.4","WIKI/GOOGL.4", "GOOG/NASDAQ_CSCO.4","GOOG/NASDAQ_FB.4",
"GOOG/NASDAQ_MSFT.4","GOOG/NASDAQ_TSLA.4","GOOG/NASDAQ_YHOO.4","GOOG/PINK_CSGKF.4",
"YAHOO/F_EOAN.4","YAHOO/F_BMW.4","YAHOO/F_ADS.4","GOOG/NYSE_ABB.4","GOOG/VTX_ADEN.4",
"GOOG/VTX_NOVN.4","GOOG/VTX_HOLN.4","GOOG/NYSE_UBS.4", "GOOG/NYSE_SAP.4", "YAHOO/SW_SNBN.4",
"YAHOO/IBM.4", "YAHOO/RIG.4" , "YAHOO/CTXS.4", "YAHOO/INTC.4","YAHOO/KO.4",
"YAHOO/NKE.4","YAHOO/MCD.4","YAHOO/EBAY.4","GOOG/VTX_NESN.4","YAHOO/MI_ALV.4","YAHOO/AXAHF.4",
"GOOG/VTX_SREN.4"]
Explanation: First of all, we need some data to proceed. For that purpose we use Quandl. First, you're going to need the quandl package. This isn't totally necessary, as pulling from the API is quite simple with or without the package, but it does make it a bit easier and knocks out a few steps. The Quandl package can be downloaded here. If we set up quandl, next thing to do is to choose some stocks to import. The following is a random selection of stocks.
End of explanation
data = quandl.get(quandlcodes,authtoken=APIToken, trim_start='2009-01-01', trim_end='2016-11-09', paginate=True, per_end_date={'gte': '2009-01-01'},
qopts={'columns':['ticker', 'per_end_date']})
Explanation: The command to import those stocks is quandl.get(). With trim_start and trim_end we can choose a desired time horizon.
End of explanation
GrowthRates = data.pct_change()*100
syms = GrowthRates.columns
Sigma = GrowthRates.cov()
stats = pd.concat((GrowthRates.mean(),GrowthRates.std()),axis=1)
stats.columns = ['Mean_return', 'Volatility']
extremes = pd.concat((stats.idxmin(),stats.min(),stats.idxmax(),stats.max()),axis=1)
extremes.columns = ['Minimizer','Minimum','Maximizer','Maximum']
stats
Explanation: Let's now calculate the growth rates and some stats:
End of explanation
fig = figure(tools="pan,box_zoom,reset,resize")
source = ColumnDataSource(stats)
hover = HoverTool(tooltips=[('Symbol','@index'),('Volatility','@Volatility'),('Mean return','@Mean_return')])
fig.add_tools(hover)
fig.circle('Volatility', 'Mean_return', size=5, color='maroon', source=source)
fig.text('Volatility', 'Mean_return', syms, text_font_size='10px', x_offset=3, y_offset=-2, source=source)
fig.xaxis.axis_label='Volatility (standard deviation)'
fig.yaxis.axis_label='Mean return'
output_file("portfolio.html")
show(fig)
Explanation: As we move towards our Markowitz portfolio designs it makes sense to view the stocks on a mean/variance scatter plot.
End of explanation
# Instantiate our model
m = Model("portfolio")
# Create one variable for each stock
portvars = [m.addVar(name=symb,lb=0.0) for symb in syms]
portvars[7]=m.addVar(name='GOOG/PINK_CSGKF - Close',lb=0.0,ub=0.5)
portvars = pd.Series(portvars, index=syms)
portfolio = pd.DataFrame({'Variables':portvars})
# Commit the changes to the model
m.update()
# The total budget
p_total = portvars.sum()
# The mean return for the portfolio
p_return = stats['Mean_return'].dot(portvars)
# The (squared) volatility of the portfolio
p_risk = Sigma.dot(portvars).dot(portvars)
# Set the objective: minimize risk
m.setObjective(p_risk, GRB.MINIMIZE)
# Fix the budget
m.addConstr(p_total, GRB.EQUAL, 1)
# Select a simplex algorithm (to ensure a vertex solution)
m.setParam('Method', 1)
m.optimize()
Explanation: Gurobi
Time to bring in the big guns. Expressed in mathematical terms, we will be solving models in this form:
$$\begin{array}{lll}
\text{minimize} & x^T \Sigma x \\
\text{subject to} & \sum_i x_i = 1 & \text{fixed budget} \\
& r^T x = \gamma & \text{fixed return} \\
& x \geq 0
\end{array}$$
In this model, the optimization variable $x\in\mathbb{R}^N$ is a vector representing the fraction of the budget allocated to each stock; that is, $x_i$ is the amount allocated to stock $i$. The parameters of the model are the mean returns $r$, a covariance matrix $\Sigma$, and the target return $\gamma$. What we will do is sweep $\gamma$ between the worst and best returns we have seen above, and compute the portfolio that achieves the target return but with as little risk as possible.
The covariance matrix $\Sigma$ merits some explanation. Along the diagonal, it contains the squares of the volatilities (standard deviations) computed above. But off the diagonal, it contains measures of the correlation between two stocks: that is, whether they tend to move in the same direction (positive correlation), in opposite directions (negative correlation), or a mixture of both (small correlation). This entire matrix is computed with a single call to Pandas.
Building the base model
We are not solving just one model here, but literally hundreds of them, with different return targets. One very nice feature of the Gurobi Python interface is that we can build a single "base" model, and reuse it for each of these scenarios by adding, removing, or adjusting constraints.
First, let's initialize the model and declare the variables: one non-negative variable per stock, so this version allows long positions only. We put these variables into a Pandas Series and DataFrame for easy organization. Another nice feature of Gurobi's Python interface is that the variable objects can be used in simple linear and quadratic expressions using familiar Python syntax.
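As a quick sanity check (a sketch that is not part of the original notebook; it assumes the GrowthRates, Sigma, stats and portvars objects defined in the surrounding cells), the diagonal of the Pandas covariance matrix should match the squared volatilities, and the Gurobi variables combine with it directly:
import numpy as np
# diagonal of the covariance matrix = variance = volatility squared (both use ddof=1 in pandas)
assert np.allclose(np.diag(Sigma), stats['Volatility']**2)
# Gurobi Var objects slot straight into pandas expressions, giving a quadratic risk expression
risk_expr = Sigma.dot(portvars).dot(portvars)
This is exactly the p_risk expression used as the objective above.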
End of explanation
portfolio['Minimum risk'] = portvars.apply(lambda x:x.getAttr('x'))
portfolio
# Add the return target
ret50 = 0.5 * extremes.loc['Mean_return','Maximum']
fixreturn = m.addConstr(p_return, GRB.EQUAL, ret50)
m.optimize()
portfolio['50% Max'] = portvars.apply(lambda x:x.getAttr('x'))
Explanation: Minimum Risk Model
We have set our objective to minimize risk, and fixed our budget at 1. The model we solved above gave us the minimum risk model.
End of explanation
m.setParam('OutputFlag',False)
# Determine the range of returns. Make sure to include the lowest-risk
# portfolio in the list of options
minret = extremes.loc['Mean_return','Minimum']
maxret = extremes.loc['Mean_return','Maximum']
riskret = extremes.loc['Volatility','Minimizer']
riskret = stats.loc[riskret,'Mean_return']
riskret =sum(portfolio['Minimum risk']*stats['Mean_return'])
returns = np.unique(np.hstack((np.linspace(minret,maxret,10000),riskret)))
# Iterate through all returns
risks = returns.copy()
for k in range(len(returns)):
fixreturn.rhs = returns[k]
m.optimize()
risks[k] = sqrt(p_risk.getValue())
fig = figure(tools="pan,box_zoom,reset,resize")
# Individual stocks
fig.circle(stats['Volatility'], stats['Mean_return'], size=5, color='maroon')
fig.text(stats['Volatility'], stats['Mean_return'], syms, text_font_size='10px', x_offset=3, y_offset=-2)
fig.circle('Volatility', 'Mean_return', size=5, color='maroon', source=source)
# Divide the efficient frontier into two sections: those with
# a return less than the minimum risk portfolio, those that are greater.
tpos_n = returns >= riskret
tneg_n = returns <= riskret
fig.line(risks[tneg_n], returns[tneg_n], color='red')
fig.line(risks[tpos_n], returns[tpos_n], color='blue')
fig.xaxis.axis_label='Volatility (standard deviation)'
fig.yaxis.axis_label='Mean return'
fig.legend.orientation='bottom_left'
output_file("efffront.html")
show(fig)
Explanation: The efficient frontier
Now what we're going to do is sweep our return target over a range of values, starting at the smallest possible value to the largest. For each, we construct the minimum-risk portfolio. This will give us a tradeoff curve that is known in the business as the efficient frontier or the Pareto-optimal curve.
Note that we're using the same model object we've already constructed! All we have to do is set the return target and re-optimize for each value of interest.
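As an aside, and only as a sketch (not in the original notebook), the fixed-return constraint can also be dropped again to recover the pure minimum-risk model, which is another way this single model object gets reused:
# assumes m and fixreturn as defined in the cells above
m.remove(fixreturn)
m.update()
m.optimize()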
End of explanation |
14,129 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython
Step1: Getting help
Step2: Typing object_name? will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
Step3: An IPython quick reference card
Step4: Tab completion
Tab completion, especially for attributes, is a convenient way to explore the structure of any object you’re dealing with. Simply type object_name.<TAB> to view the object’s attributes. Besides Python objects and keywords, tab completion also works on file and directory names.
Step5: The interactive workflow
Step6: You can suppress the storage and rendering of output if you append ; to the last cell (this comes in handy when plotting with matplotlib, for example)
Step7: The output is stored in _N and Out[N] variables
Step8: And the last three have shorthands for convenience
Step9: Exercise
Write the last 10 lines of history to a file named log.py.
Accessing the underlying operating system
Step10: Note that all this is available even in multiline blocks
Step11: Beyond Python
Step12: Line vs cell magics
Step13: Line magics can be used even inside code blocks
Step14: Magics can do anything they want with their input, so it doesn't have to be valid Python
Step15: Another interesting cell magic
Step16: Let's see what other magics are currently defined in the system
Step17: Running normal Python code
Step18: And when your code produces errors, you can control how they are displayed with the %xmode magic
Step19: Now let's call the function g with an argument that would produce an error
Step20: The default %xmode is "context", which shows additional context but not all local variables. Let's restore that one for the rest of our session.
Step21: Running code in other languages with special %% magics
Step22: Exercise
Write a cell that executes in Bash and prints your current working directory as well as the date.
Apologies to Windows users who may not have Bash available, not sure how to obtain the equivalent result with cmd.exe or Powershell.
Step23: Raw Input in the notebook
Since 1.0, the IPython notebook web application supports raw_input, which for example allows us to invoke the %debug magic in the notebook
Step24: Don't forget to exit your debugging session. Raw input can of course be used to ask for user input
Step25: Plotting in the notebook
This magic configures matplotlib to render its figures inline
Step26: The IPython kernel/client model
Step27: We can connect automatically a Qt Console to the currently running kernel with the %qtconsole magic, or by typing ipython console --existing <kernel-UUID> in any terminal | Python Code:
print("Hi")
Explanation: IPython: beyond plain Python
When executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient.
First things first: running code, getting help
In the notebook, to run a cell of code, hit Shift-Enter. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use:
Alt-Enter to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).
Control-Enter executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
End of explanation
?
Explanation: Getting help:
End of explanation
import collections
collections.namedtuple?
collections.Counter??
*int*?
Explanation: Typing object_name? will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
End of explanation
%quickref
Explanation: An IPython quick reference card:
End of explanation
collections.
Explanation: Tab completion
Tab completion, especially for attributes, is a convenient way to explore the structure of any object you’re dealing with. Simply type object_name.<TAB> to view the object’s attributes. Besides Python objects and keywords, tab completion also works on file and directory names.
End of explanation
2+10
_+10
Explanation: The interactive workflow: input, output, history
End of explanation
10+20;
_
Explanation: You can suppress the storage and rendering of output if you append ; to the last cell (this comes in handy when plotting with matplotlib, for example):
End of explanation
_10 == Out[10]
Explanation: The output is stored in _N and Out[N] variables:
End of explanation
print('last output:', _)
print('next one :', __)
print('and next :', ___)
In[11]
_i
_ii
print('last input:', _i)
print('next one :', _ii)
print('and next :', _iii)
%history -n 1-5
Explanation: And the last three have shorthands for convenience:
End of explanation
!pwd
files = !ls
print("My current directory's files:")
print(files)
!echo {files[0].upper()}
Explanation: Exercise
Write the last 10 lines of history to a file named log.py.
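One possible solution, shown here only as a sketch (check %history? for the exact options in your IPython version):
%history -l 10 -f log.py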
Accessing the underlying operating system
End of explanation
import os
for i,f in enumerate(files):
if f.endswith('ipynb'):
!echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
else:
print('--')
Explanation: Note that all this is available even in multiline blocks:
End of explanation
%magic
Explanation: Beyond Python: magic functions
The IPython 'magic' functions are a set of commands, invoked by prepending one or two % signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with -- and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold:
To provide an orthogonal namespace for controlling IPython itself and exposing other system-oriented functionality.
To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration taken from the classic Unix shell style for commands.
End of explanation
%timeit range(10)
%%timeit
range(10)
range(100)
Explanation: Line vs cell magics:
End of explanation
for i in range(5):
size = i*100
print('size:',size)
%timeit range(size)
Explanation: Line magics can be used even inside code blocks:
End of explanation
%%bash
echo "My shell is:" $SHELL
echo "My memory status is:"
free
Explanation: Magics can do anything they want with their input, so it doesn't have to be valid Python:
End of explanation
%%writefile test.txt
This is a test file!
It can contain anything I want...
And more...
!cat test.txt
Explanation: Another interesting cell magic: create any file you want locally from the notebook:
End of explanation
%lsmagic
Explanation: Let's see what other magics are currently defined in the system:
End of explanation
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
... print(b)
... a, b = b, a+b
In [1]: for i in range(10):
...: print(i)
...:
Explanation: Running normal Python code: execution and errors
Not only can you input normal Python code, you can even paste straight from a Python or IPython shell session:
End of explanation
%%writefile mod.py
def f(x):
return 1.0/(x-1)
def g(y):
return f(y+1)
Explanation: And when your code produces errors, you can control how they are displayed with the %xmode magic:
End of explanation
import mod
mod.g(0)
%xmode plain
mod.g(0)
%xmode verbose
mod.g(0)
Explanation: Now let's call the function g with an argument that would produce an error:
End of explanation
%xmode context
Explanation: The default %xmode is "context", which shows additional context but not all local variables. Let's restore that one for the rest of our session.
End of explanation
%%perl
@months = ("July", "August", "September");
print $months[0];
%%ruby
name = "world"
puts "Hello #{name.capitalize}!"
Explanation: Running code in other languages with special %% magics
End of explanation
%load ../../exercises/soln/bash-script
Explanation: Exercise
Write a cell that executes in Bash and prints your current working directory as well as the date.
Apologies to Windows users who may not have Bash available, not sure how to obtain the equivalent result with cmd.exe or Powershell.
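One possible solution, sketched here for reference (the %load line above pulls in the course's own version):
%%bash
pwd
date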
End of explanation
mod.g(0)
%debug
Explanation: Raw Input in the notebook
Since 1.0 the IPython notebook web application support raw_input which for example allow us to invoke the %debug magic in the notebook:
End of explanation
enjoy = input('Are you enjoying this tutorial ?')
print('enjoy is :', enjoy)
Explanation: Don't forget to exit your debugging session. Raw input can of course be used to ask for user input:
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x**2)
plt.plot(x, y)
plt.title("A little chirp")
fig = plt.gcf() # let's keep the figure object around for later...
Explanation: Plotting in the notebook
This magic configures matplotlib to render its figures inline:
End of explanation
%connect_info
Explanation: The IPython kernel/client model
End of explanation
%qtconsole
Explanation: We can automatically connect a Qt Console to the currently running kernel with the %qtconsole magic, or by typing ipython console --existing <kernel-UUID> in any terminal:
End of explanation |
14,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib
Reference Documents <A id='ref'></A>
<OL>
<LI> <A HREF="http
Step1: What is Matplotlib?
matplotlib is a library for making <B>2D plots</B> of arrays in Python. It is capable of producing
sophisticated 2D plots with high quality outputs.
matplotlib has some 3D plotting capabilities through <A HREF="http
Step2: Useful Syntax
Data for plotting
$t\in[0,4\pi]$, $s_1=\cos(t)$, $s_2=\frac{1}{2}\cos(t)+\frac{1}{4}\cos(3t)$
Step3: Two plots on the same axes with labels
Step4: Change linestyle
<A HREF="http
Step5: Add Legend
can use LaTeX in text
Step6: Change Axes Limits
Step7: Save Figure to File
<B><I>savefig(filename.ext)</I></B> saves current figure into a file (<A HREF="http
Step8: Multiple Figures on the Same Plot
<B><I>subplot(nrows, ncols, plot_number)</I></B> creates a subplot axes positioned by the given grid definition (<A HREF="http
Step9: Annotating Text
Default options
Step10: Fine control of text and arrow appearance
Step11: Plot with Fill
Step12: Histogram
Step13: Log Plot
Step14: Pie Chart
Step15: Contour Plot and Colorbar
Step16: Using Matplotlib Gallery
If you do not know how to make a specific plot, the best place to go is <A HREF="http
Step17: Matplotlib API
Step18: 3D plotting with mplot3d | Python Code:
from IPython.display import YouTubeVideo
#YouTubeVideo("https://www.youtube.com/watch?v=P7SVi0YTIuE")
YouTubeVideo("P7SVi0YTIuE")
Explanation: Matplotlib
Reference Documents <A id='ref'></A>
<OL>
<LI> <A HREF="http://matplotlib.org/">Homepage of Matplotlib</A>
<LI> <A HREF="http://matplotlib.org/api/pyplot_summary.html">Matplotlib: Pyplot command summary</A>
<LI> <A HREF="http://matplotlib.org/resources/index.html#books-chapters-and-articles">Matplotlib: Books, videos, tutorials</A>
<LI> <A HREF="http://matplotlib.org/gallery.html">Matplotlib: Gallery</A>
</OL>
Recommended Tutorials
<A HREF="http://www.labri.fr/perso/nrougier/teaching/matplotlib/matplotlib.html">Matplotlib Tutorial</A> by Nicolas P. Rougier
A slightly old (but not obsolete!) video by Mike Müller from 2012 with introduction to plotting capabilities of matplotlib [~2 hours]
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pylab as plt
x = [2, 3, 5, 7, 11,13,16]
y = [4, 9, 5, 9, 1,3,4.5]
plt.plot(x, y)
Explanation: What is Matplotlib?
matplotlib is a library for making <B>2D plots</B> of arrays in Python. It is capable of producing
sophisticated 2D plots with high quality outputs.
matplotlib has some 3D plotting capabilities through <A HREF="http://matplotlib.org/mpl_toolkits/mplot3d/index.html">mplot3d toolkit</A>, but for sophisticated 3D visualization
it is better to use different packages
matplotlib is conceptually divided into three parts:
<UL>
<LI> the <B><I>pylab</I></B> interface is the <I>set of functions</I> provided by matplotlib.pylab which allow the user to create plots with code quite similar to MATLAB figure generating code.
<LI> The <B><I>matplotlib API</I></B> is the <I>set of classes</I> that do the heavy lifting, creating and managing figures, text, lines, plots and so on.
<LI> The <B><I>backends</I></B> are device-dependent drawing devices, aka renderers, that transform the frontend representation to hardcopy or a display device - not covered in this tutorial
</UL>
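As a minimal sketch (not part of the original notebook) of the difference between the pylab/pyplot interface and the object-oriented API described above:
# pyplot state-machine style: functions act on the "current" figure/axes
plt.plot([1, 2, 3], [1, 4, 9])
plt.title('pyplot style')
# object-oriented API: create and manipulate Figure/Axes objects explicitly
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])
ax.set_title('OO API style')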
Pyplot interface
Simple Plot
End of explanation
import math
t = np.linspace(0, 4*np.pi, 100)
s1 = np.cos(t)
s2 = 0.5*np.cos(t) + 0.25*np.cos(3*t)
Explanation: Useful Syntax
Data for plotting
$t\in[0,4\pi]$, $s_1=\cos(t)$, $s_2=\frac{1}{2}\cos(t)+\frac{1}{4}\cos(3t)$
End of explanation
plt.plot(t, s1)
plt.plot(t, s2)
# add labels
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.title('About as simple as it gets, folks')
# add grid
plt.grid(True)
Explanation: Two plots on the same axes with labels
End of explanation
# short form of linestyle specification
plt.plot(t, s1,'k-', linewidth=1)
# long form of linestyle specification
plt.plot(t, s2, linestyle='dashed',color='r', linewidth=1)
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.title('About as simple as it gets, folks')
plt.grid(True)
Explanation: Change linestyle
<A HREF="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot">help</A> for <I>plot()</I>
function - line style options listed here
End of explanation
x = np.linspace(0, 2*np.pi, 300)
y1 = np.sin(x)
y2 = np.sin(x**2)
# add labels to the curves, can use LaTeX
plt.plot(x, y1, label=r'$\sin(x)$')
plt.plot(x, y2, label=r'$\sin(x^2)$')
plt.title('functions: $\sin(x)$ and $\sin(x^2)$')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
# add legend; loc=0 "best" placement
plt.legend(loc=0)
Explanation: Add Legend
can use LaTeX in text
End of explanation
# add labels to the curves, can use LaTeX
plt.plot(x, y1, label=r'$\sin(x)$')
plt.plot(x, y2, label=r'$\sin(x^2)$')
plt.title('functions: $\sin(x)$ and $\sin(x^2)$')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
# change axes limits
plt.xlim(-0.1,6.5)
plt.ylim(-1.1,1.1)
# add legend; loc=0 "best" placement
plt.legend(loc=0)
Explanation: Change Axes Limits
End of explanation
# add labels to the curves, can use LaTeX
plt.plot(x, y1, label=r'$\sin(x)$')
plt.plot(x, y2, label=r'$\sin(x^2)$')
plt.title('functions: $\sin(x)$ and $\sin(x^2)$')
plt.xlabel('x')
plt.ylabel('y')
plt.grid()
# change axes limits
plt.xlim(-0.1,6.5)
plt.ylim(-1.1,1.1)
# add legend; loc=0 "best" placement
plt.legend(loc=0)
# save figure to a PDF file
plt.savefig('sin_legend_plot.pdf')
Explanation: Save Figure to File
<B><I>savefig(filename.ext)</I></B> saves current figure into a file (<A HREF="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig">help</A>)
End of explanation
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
# create figure
plt.figure(1)
# first subplot - 2,1,1: 2 rows, 1 column, first plot
plt.subplot(2,1,1)
# plot both data with a single plot function call
# first plot filled circles, then solid black line
plt.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
# can use a simpler form, without commas - 212: 2 rows, 1 column, second plot
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
Explanation: Multiple Figures on the Same Plot
<B><I>subplot(nrows, ncols, plot_number)</I></B> creates a subplot axes positioned by the given grid definition (<A HREF="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot">help</A>)
End of explanation
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
# use 'lw' instead of 'linewidth'
plt.plot(t, s, lw=1.5)
# put annotation on to plot
plt.annotate('local max of $\cos(t)$', \
xy=(2, 1), \
xytext=(3, 1.5),\
arrowprops=dict(facecolor='black'))
# change y limits for y-axis
plt.ylim(-2,2);
Explanation: Annotating Text
Default options
End of explanation
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
# use 'lw' instead of 'linewidth'
plt.plot(t, s, lw=1.5)
# put annotation on to plot
# fine control of text and arrow properties
plt.annotate('local max of $\cos(t)$', \
xy=(2, 1), \
xytext=(3, 1.5), \
arrowprops=dict(facecolor='black', shrink=0.05,width=1,headwidth=6), \
fontsize='x-large',fontname="serif")
# change y limits for y-axis
plt.ylim(-2,2);
Explanation: Fine control of text and arrow appearance
End of explanation
t = np.arange(0.0, 1.01, 0.01)
s = np.sin(2*2*np.pi*t)
plt.fill(t, s*np.exp(-5*t), 'r')
plt.grid(True)
Explanation: Plot with Fill
End of explanation
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
# alpha - is the transparency of the plot
n, bins, patches = plt.hist(x,50, normed=True, facecolor='g', alpha=.65)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(50, .0275, r'$\mu=100,\ \sigma=15$', fontsize='x-large')
plt.axis([40, 160, 0, 0.031]);
Explanation: Histogram
End of explanation
plt.subplots_adjust(hspace=0.4)
t = np.arange(0.01, 20.0, 0.01)
# log y axis
plt.subplot(221)
plt.semilogy(t, np.exp(-t/5.0))
plt.title('semilogy')
plt.grid(True)
# log x axis
plt.subplot(222)
plt.semilogx(t, np.sin(2*np.pi*t))
plt.ylim(-1.1,1.1)
plt.title('semilogx')
plt.grid(True)
# log x and y axis
plt.subplot(223)
plt.loglog(t, 20*np.exp(-t/10.0), basex=2)
plt.grid(True)
plt.title('loglog base 2 on x')
# with errorbars: clip non-positive values
plt.subplot(224)
# set logarithmic axes
plt.xscale("log", nonposx='clip')
plt.yscale("log", nonposy='clip')
x = 10.0**np.linspace(0.0, 2.0, 20)
y = x**2.0
plt.errorbar(x, y, xerr=0.1*x, yerr=5.0+0.75*y)
plt.ylim(ymin=0.1)
plt.title('Errorbars go negative')
# will need it later
fig_log_plots=plt.gcf()
Explanation: Log Plot
End of explanation
# make a square figure and axes
plt.figure(1, figsize=(6,6))
plt.axes([0.1, 0.1, 0.8, 0.8])
labels = 'Frogs', 'Hogs', 'Dogs', 'Logs'
fracs = [15,30,45, 10]
explode=(0, 0.1, 0, 0)
plt.pie(fracs, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True)
plt.title('Raining Hogs and Dogs', bbox={'facecolor':'0.9', 'pad':5})
Explanation: Pie Chart
End of explanation
def f(x,y):
return (1-x/2+x**5+y**3)*np.exp(-x**2-y**2)
n = 256
x = np.linspace(-3,3,n)
y = np.linspace(-3,3,n)
X,Y = np.meshgrid(x,y)
plt.axes([0.025,0.025,0.95,0.95])
plt.contourf(X, Y, f(X,Y), 10, alpha=.75, cmap=plt.cm.hot)
plt.colorbar()
C = plt.contour(X, Y, f(X,Y), 10, colors='black', linewidth=.5)
plt.clabel(C, inline=1, fontsize=10)
plt.xticks([])
plt.yticks([])
Explanation: Contour Plot and Colorbar
End of explanation
# I just picked one to try:
%load http://matplotlib.org/mpl_examples/api/sankey_demo_rankine.py
Explanation: Using Matplotlib Gallery
If you do not know how to make a specific plot, the best place to go is <A HREF="http://matplotlib.org/gallery.html">matplotlib gallery</A>.
It contains a large number of plots together with the Python souces files which you can download and adjust to your needs.
End of explanation
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
# create figure
fig=plt.figure(1)
# add subplot to the figure, retunrns Axes object
ax1=fig.add_subplot(2,1,1)
# plot both data with a single plot function call
# first plot filled circles, then solid black line
lines11,lines12,=ax1.plot(t1, f(t1), 'bo', t2, f(t2), 'k')
# can use a simpler form, without commas - 212: 2 rows, 1 column, second plot
ax2=fig.add_subplot(212)
lines22,=ax2.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
lines22.set_linewidth(1)
lines22.set_linestyle(':')
ax2.set_ylim(-1.5,1.5)
ax1.grid(True)
fig
fig_log_plots.savefig('log_plots.png')
fig.savefig('two_plots.png',dpi=600)
Explanation: Matplotlib API
End of explanation
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
cset = ax.contourf(X, Y, Z, zdir='z', offset=-100, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='x', offset=-40, cmap=cm.coolwarm)
cset = ax.contourf(X, Y, Z, zdir='y', offset=40, cmap=cm.coolwarm)
ax.set_xlabel('X')
ax.set_xlim(-40, 40)
ax.set_ylabel('Y')
ax.set_ylim(-40, 40)
ax.set_zlabel('Z')
ax.set_zlim(-100, 100)
plt.show()
Explanation: 3D plotting with mplot3d
End of explanation |
14,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the female respondent file and display the variable names.
Step1: Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.
Step2: Display the histogram.
Step3: Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.
Step4: Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.
Step5: Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?
Step6: Use Hist.Largest to find the largest values of <tt>parity</tt>.
Step7: Use <tt>totincr</tt> to select the respondents with the highest income. Compute the distribution of <tt>parity</tt> for just the high income respondents.
Step8: Find the largest parities for high income respondents.
Step9: Compare the mean <tt>parity</tt> for high income respondents and others.
Step10: Investigate any other variables that look interesting. | Python Code:
%matplotlib inline
import chap01soln
resp = chap01soln.ReadFemResp()
resp.columns
Explanation: Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>
Allen Downey
Read the female respondent file and display the variable names.
End of explanation
import thinkstats2
hist = thinkstats2.Hist(resp.totincr)
Explanation: Make a histogram of <tt>totincr</tt> the total income for the respondent's family. To interpret the codes see the codebook.
End of explanation
import thinkplot
thinkplot.Hist(hist, label='totincr')
thinkplot.Show()
Explanation: Display the histogram.
End of explanation
hist = thinkstats2.Hist(resp.ager)
thinkplot.Hist(hist, label='ager')
thinkplot.Show()
Explanation: Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.
End of explanation
hist = thinkstats2.Hist(resp.numfmhh)
thinkplot.Hist(hist, label='numfmhh')
thinkplot.Show()
Explanation: Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.
End of explanation
hist = thinkstats2.Hist(resp.parity)
thinkplot.Hist(hist, label='parity')
thinkplot.Show()
Explanation: Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?
End of explanation
hist.Largest(10)
Explanation: Use Hist.Largest to find the largest values of <tt>parity</tt>.
End of explanation
rich = resp[resp.totincr == 14]
hist = thinkstats2.Hist(rich.parity)
thinkplot.Hist(hist, label='parity')
thinkplot.Show()
Explanation: Use <tt>totincr</tt> to select the respondents with the highest income. Compute the distribution of <tt>parity</tt> for just the high income respondents.
End of explanation
hist.Largest(10)
Explanation: Find the largest parities for high income respondents.
End of explanation
notrich = resp[resp.totincr < 14]
rich.parity.mean(), notrich.parity.mean()
Explanation: Compare the mean <tt>parity</tt> for high income respondents and others.
End of explanation
hist = thinkstats2.Hist(resp.fmarno)
thinkplot.Hist(hist, label='fmarno')
thinkplot.Show()
Explanation: Investigate any other variables that look interesting.
End of explanation |
14,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Higgs data set
URL
Step1: As done in previous notebook, create RDDs from raw data and build Gradient boosting and Random forests models. Consider doing 1% sampling since the dataset is too big for your local machine | Python Code:
#define feature names
feature_text='lepton pT, lepton eta, lepton phi, missing energy magnitude, missing energy phi, jet 1 pt, jet 1 eta, jet 1 phi, jet 1 b-tag, jet 2 pt, jet 2 eta, jet 2 phi, jet 2 b-tag, jet 3 pt, jet 3 eta, jet 3 phi, jet 3 b-tag, jet 4 pt, jet 4 eta, jet 4 phi, jet 4 b-tag, m_jj, m_jjj, m_lv, m_jlv, m_bb, m_wbb, m_wwbb'
features=[a.strip() for a in feature_text.split(',')]
print len(features),features
# create a directory called higgs, download and decompress HIGGS.csv.gz into it
from os.path import exists
if not exists('higgs'):
print "creating directory higgs"
!mkdir higgs
%cd higgs
if not exists('HIGGS.csv'):
if not exists('HIGGS.csv.gz'):
print 'downloading HIGGS.csv.gz'
!curl -O http://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz
print 'decompressing HIGGS.csv.gz --- May take 5-10 minutes'
!gunzip -f HIGGS.csv.gz
!ls -l
%cd ..
Explanation: Higgs data set
URL: http://archive.ics.uci.edu/ml/datasets/HIGGS#
Abstract: This is a classification problem to distinguish between a signal process which produces Higgs bosons and a background process which does not.
Data Set Information:
The data has been produced using Monte Carlo simulations. The first 21 features (columns 2-22) are kinematic properties measured by the particle detectors in the accelerator. The last seven features are functions of the first 21 features; these are high-level features derived by physicists to help discriminate between the two classes. There is an interest in using deep learning methods to obviate the need for physicists to manually develop such features. Benchmark results using Bayesian Decision Trees from a standard physics package and 5-layer neural networks are presented in the original paper. The last 500,000 examples are used as a test set.
End of explanation
# Read the file into an RDD
# If doing this on a real cluster, you need the file to be available on all nodes, ideally in HDFS.
# imports for the MLlib classes used below
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import GradientBoostedTrees, RandomForest
path='higgs/HIGGS.csv'
inputRDD=sc.textFile(path)
# Transform the text RDD into an RDD of LabeledPoints
Data=inputRDD.map(lambda line: [float(x.strip()) for x in line.split(',')])\
.map(lambda a: LabeledPoint(a[0], a[1:]))
Data1=Data.sample(False,0.01).cache()
(trainingData,testData)=Data1.randomSplit([0.7,0.3])
print 'Sizes: Data1=%d, trainingData=%d, testData=%d'%(Data1.count(),trainingData.cache().count(),testData.cache().count())
from time import time
errors={}
for depth in [1,3,6,10]:
start=time()
model=GradientBoostedTrees.trainClassifier(trainingData, categoricalFeaturesInfo={}, numIterations=10, maxDepth=depth)
#print model.toDebugString()
errors[depth]={}
dataSets={'train':trainingData,'test':testData}
for name in dataSets.keys(): # Calculate errors on train and test sets
data=dataSets[name]
Predicted=model.predict(data.map(lambda x: x.features))
LabelsAndPredictions = data.map(lambda lp: lp.label).zip(Predicted)
Err = LabelsAndPredictions.filter(lambda (v,p): v != p).count()/float(data.count())
errors[depth][name]=Err
print depth,errors[depth],int(time()-start),'seconds'
print errors
B10 = errors
# Plot Train/test accuracy vs Depth of trees graph
%pylab inline
from plot_utils import *
make_figure([B10],['10Trees'],Title='Boosting using 1% of data')
from time import time
errors={}
for depth in [1,3,6,10,15,20]:
start=time()
model = RandomForest.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={},
numTrees=10, featureSubsetStrategy="auto",
impurity='gini', maxDepth=depth)
errors[depth]={}
dataSets={'train':trainingData,'test':testData}
for name in dataSets.keys(): # Calculate errors on train and test sets
data=dataSets[name]
Predicted=model.predict(data.map(lambda x: x.features))
LabelsAndPredictions = data.map(lambda lp: lp.label).zip(Predicted)
Err = LabelsAndPredictions.filter(lambda (v,p): v != p).count()/float(data.count())
errors[depth][name]=Err
print depth,errors[depth],int(time()-start),'seconds'
print errors
RF_10trees = errors
# Plot Train/test accuracy vs Depth of trees graph
make_figure([RF_10trees],['10Trees'],Title='Random Forests using 1% of data')
make_figure([RF_10trees, B10],['10Trees', 'GB'],Title='GBT vs RF')
Explanation: As done in previous notebook, create RDDs from raw data and build Gradient boosting and Random forests models. Consider doing 1% sampling since the dataset is too big for your local machine
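If you wanted to reproduce the paper's fixed split mentioned above (the last 500,000 examples as the test set) instead of randomSplit on a 1% sample, a rough sketch using RDD indices could look like this (Python 2 style lambdas to match the rest of this notebook; not part of the original solution):
total = Data.count()
indexed = Data.zipWithIndex() # (LabeledPoint, index) pairs
trainRDD = indexed.filter(lambda (lp, i): i < total - 500000).map(lambda (lp, i): lp)
testRDD = indexed.filter(lambda (lp, i): i >= total - 500000).map(lambda (lp, i): lp)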
End of explanation |
14,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Image Classification with TensorFlow on Vertex AI
This notebook demonstrates how to implement different image models on MNIST using the tf.keras API.
Learning Objectives
Understand how to build a Dense Neural Network (DNN) for image classification
Understand how to use dropout (DNN) for image classification
Understand how to use Convolutional Neural Networks (CNN)
Know how to deploy and use an image classification model using Google Cloud's Vertex AI
First things first. Configure the parameters below to match your own Google Cloud project details.
Step3: Building a dynamic model
In the previous notebook, <a href="mnist_linear.ipynb">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on Vertex AI, it needs to be packaged as a python module.
The boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnist_models/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them.
Let's start with the trainer file first. This file parses command line arguments to feed into the model.
Step6: Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab.
Step10: Finally, let's code the models! The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions
Step11: Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script mnist_models/trainer/test.py to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. Lines 14 and 15 have the number of epochs and steps per epoch, respectively.
Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
Step12: Now that we know that our models are working as expected, let's run it on Google Cloud within Vertex AI. We can run it as a python module locally first using the command line.
The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp.
Step13: The cell below runs the local version of the code. The epochs and steps_per_epoch flag can be changed to run for longer or shorther, as defined in our mnist_models/trainer/task.py file.
Step14: Training on the cloud
For this model, we will be able to use a Tensorflow pre-built container on Vertex AI, as we do not have any particular additional prerequisites. As before, we use setuptools for this, and store the created source distribution on Cloud Storage.
Step15: Then, we can kickoff the Vertex AI Custom Job using the pre-built container. We can pass our source distribution URI using the --python-package-uris flag.
Step16: Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is to upload the created model artifact from Cloud Storage to Vertex AI as a model, create a new endpoint, and deploy the model to the endpoint.
Step17: To predict with the model, let's take one of the example images.
TODO 4
Step18: Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab! | Python Code:
import os
from datetime import datetime
REGION = "us-central1"
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
MODEL_TYPE = "cnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
Explanation: MNIST Image Classification with TensorFlow on Vertex AI
This notebook demonstrates how to implement different image models on MNIST using the tf.keras API.
Learning Objectives
Understand how to build a Dense Neural Network (DNN) for image classification
Understand how to use dropout (DNN) for image classification
Understand how to use Convolutional Neural Networks (CNN)
Know how to deploy and use an image classification model using Google Cloud's Vertex AI
First things first. Configure the parameters below to match your own Google Cloud project details.
End of explanation
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
    """Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
    """Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
Explanation: Building a dynamic model
In the previous notebook, <a href="mnist_linear.ipynb">mnist_linear.ipynb</a>, we ran our code directly from the notebook. In order to run it on Vertex AI, it needs to be packaged as a python module.
The boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a python package with the empty __init__.py (mnist_models/trainer/__init__.py) file. It still needs the model and a trainer to run it, so let's make them.
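For reference, the package layout described above (using only files created or mentioned in this notebook) ends up looking roughly like this:
mnist_models/
    setup.py
    trainer/
        __init__.py
        task.py
        util.py
        model.py
        test.py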
Let's start with the trainer file first. This file parses command line arguments to feed into the model.
End of explanation
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
    """Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
    """Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
Explanation: Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the scale and load_dataset functions from the previous lab.
End of explanation
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
    """Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
    """Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
    """Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
Explanation: Finally, let's code the models! The tf.keras API accepts an array of layers into a model object, so we can create a dictionary of layers based on the different model types we want to use. The file below has three functions: get_layers, build_model, and train_and_evaluate. We will build the structure of our model in get_layers. Last but not least, we'll copy over the training code from the previous lab into train_and_evaluate.
TODO 1: Define the Keras layers for a DNN model
TODO 2: Define the Keras layers for a dropout model
TODO 3: Define the Keras layers for a CNN model
Hint: These models progressively build on each other. Look at the imported tensorflow.keras.layers modules and the default values for the variables defined in get_layers for guidance.
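One possible way to fill in the three TODO entries of the model_layers dictionary, using only the layer classes imported in model.py and the default hyperparameters of get_layers; treat this as a sketch of one workable completion rather than the official solution (these entries would replace the # TODO placeholders inside get_layers):
    'dnn': [
        Flatten(),
        Dense(hidden_layer_1_neurons, activation='relu'),
        Dense(hidden_layer_2_neurons, activation='relu'),
        Dense(nclasses),
        Softmax()
    ],
    'dnn_dropout': [
        Flatten(),
        Dense(hidden_layer_1_neurons, activation='relu'),
        Dense(hidden_layer_2_neurons, activation='relu'),
        Dropout(dropout_rate),
        Dense(nclasses),
        Softmax()
    ],
    'cnn': [
        Conv2D(num_filters_1, kernel_size=kernel_size_1,
               activation='relu', input_shape=(WIDTH, HEIGHT, 1)),
        MaxPooling2D(pooling_size_1),
        Conv2D(num_filters_2, kernel_size=kernel_size_2, activation='relu'),
        MaxPooling2D(pooling_size_2),
        Flatten(),
        Dense(hidden_layer_1_neurons, activation='relu'),
        Dense(hidden_layer_2_neurons, activation='relu'),
        Dropout(dropout_rate),
        Dense(nclasses),
        Softmax()
    ]
The Dropout layer zeroes a fraction dropout_rate of activations during training, and the two Conv2D/MaxPooling2D pairs shrink the 28x28 input before the dense head.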
End of explanation
!python3 -m mnist_models.trainer.test
Explanation: Local Training
With everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script mnist_models/trainer/test.py to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check. Lines 14 and 15 have the number of epochs and steps per epoch, respectively.
Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
End of explanation
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time
)
Explanation: Now that we know that our models are working as expected, let's run it on Google Cloud within Vertex AI. We can run it as a python module locally first using the command line.
The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp.
End of explanation
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
Explanation: The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our mnist_models/trainer/task.py file.
End of explanation
%%writefile mnist_models/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='mnist_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
description='MNIST model training application.'
)
%%bash
cd mnist_models
python ./setup.py sdist --formats=gztar
cd ..
gsutil cp mnist_models/dist/mnist_trainer-0.1.tar.gz gs://${BUCKET}/mnist/
Explanation: Training on the cloud
For this model, we will be able to use a Tensorflow pre-built container on Vertex AI, as we do not have any particular additional prerequisites. As before, we use setuptools for this, and store the created source distribution on Cloud Storage.
End of explanation
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time
)
os.environ["JOB_NAME"] = f"mnist_{model_type}_{current_time}"
%%bash
echo $JOB_DIR $REGION $JOB_NAME
PYTHON_PACKAGE_URIS=gs://${BUCKET}/mnist/mnist_trainer-0.1.tar.gz
MACHINE_TYPE=n1-standard-4
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
WORKER_POOL_SPEC="machine-type=$MACHINE_TYPE,\
replica-count=$REPLICA_COUNT,\
executor-image-uri=$PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,\
python-module=$PYTHON_MODULE"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--python-package-uris=$PYTHON_PACKAGE_URIS \
--worker-pool-spec=$WORKER_POOL_SPEC \
--args="--job-dir=$JOB_DIR,--model_type=$MODEL_TYPE"
%%bash
SAVEDMODEL_DIR=${JOB_DIR}keras_export
echo $SAVEDMODEL_DIR
gsutil ls $SAVEDMODEL_DIR
Explanation: Then, we can kick off the Vertex AI Custom Job using the pre-built container. We can pass our source distribution URI using the --python-package-uris flag.
End of explanation
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=mnist_$TIMESTAMP
ENDPOINT_DISPLAYNAME=mnist_endpoint_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
SAVEDMODEL_DIR=${JOB_DIR}keras_export
echo $SAVEDMODEL_DIR
# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
--region=$REGION \
--display-name=$MODEL_DISPLAYNAME \
--container-image-uri=$IMAGE_URI \
--artifact-uri=$SAVEDMODEL_DIR \
--format="value(model)")
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"
# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
--region=$REGION \
--display-name=$ENDPOINT_DISPLAYNAME \
--format="value(name)")
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"
# Deployment
DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
--region=$REGION \
--model=$MODEL_RESOURCENAME \
--display-name=$DEPLOYED_MODEL_DISPLAYNAME \
--machine-type=$MACHINE_TYPE \
--min-replica-count=1 \
--max-replica-count=1 \
--traffic-split=0=100
Explanation: Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All we need to do is to upload the created model artifact from Cloud Storage to Vertex AI as a model, create a new endpoint, and deploy the model to the endpoint.
End of explanation
import codecs
import json
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = {"instances": [test_image.reshape(HEIGHT, WIDTH, 1).tolist()]}
json.dump(jsondata, codecs.open("test.json", "w", encoding="utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
!cat test.json
Explanation: To predict with the model, let's take one of the example images.
TODO 4: Write a .json file with image data to send to a Vertex AI deployed model
End of explanation
%%bash
ENDPOINT_RESOURCENAME= # TODO: insert ENDPOINT_RESOURCENAME from above
gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \
--region=$REGION \
--json-request=test.json
Explanation: Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
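To turn the returned vector into a digit, a small sketch (assuming the response's predictions field has been parsed into a Python list named predictions, which is not shown in this lab):
import numpy as np
# predictions[0] is assumed to be the 10-element output vector for our single test image
predicted_digit = int(np.argmax(predictions[0]))
print(predicted_digit)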
End of explanation |
14,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 9
Object Oriented Programming
Monday, October 2nd 2017
Step1: Motivation
We would like to find a way to represent complex, structured data in the context of our programming language.
For example, to represent a location, we might want to associate a name, a latitude and a longitude with it.
Thus we would want to create a compound data type which carries this information.
In C, for example, this is a struct
Step2: But things aren't hidden so I can get through the interface
Step3: Because I used a tuple, and a tuple is immutable, I can't change this complex number once it's created.
Step4: Objects thru closures
Let's try an implementation that uses a closure to capture the value of arguments.
Step5: This looks pretty good so far.
The only problem is that we don't have a way to change the real and imaginary parts.
For this, we need to add things called setters.
Objects with Setters
Step6: Python Classes and instance variables
We constructed an object system above. But Python comes with its own.
Classes allow us to define our own types in the Python type system.
Step7: __init__ is a special method run automatically by Python.
It is a constructor.
self is the instance of the object.
It acts like this in C++ but self is explicit.
Step8: Inheritance and Polymorphism
Inheritance
Inheritance is the idea that a "Cat" is-a "Animal" and a "Dog" is-a "Animal".
Animals make sounds, but Cats Meow and Dogs Bark.
Inheritance makes sure that methods not defined in a child are found and used from a parent.
Polymorphism
Polymorphism is the idea that an interface is specified, but not necessarily implemented, by a superclass and then the interface is implemented in subclasses (differently).
[Actually Polymorphism is much more complex and interesting than this, and this definition is really an outcome of polymorphism. But we'll come to this later.]
Example
Step9: Animal is the superclass (a.k.a the base class).
Dog and Cat are both subclasses (a.k.a derived classes) of the Animal superclass.
Using the Animal class
Step10: How does this all work?
Step11: Calling a superclass's initializer
Say we don't want to do all the work of setting the name variable in the subclasses.
We can set this "common" work up in the superclass and use super to call the superclass's initializer from the subclass.
There's another way to think about this
Step12: Interfaces
The above examples show inheritance and polymorphism.
Notice that we didn't actually need to set up the inheritance.
We could have just defined 2 different classes and have them both make_sound.
In Java and C++ this is done more formally through Interfaces and Abstract Base Classes, respectively, plus inheritance.
In Python, this agreement to define make_sound is called duck typing.
"If it walks like a duck and quacks like a duck, it is a duck."
Step13: The Python Data Model
Duck typing is used throughout Python. Indeed it's what enables the "Python Data Model"
All python classes implicitly inherit from the root object class.
The Pythonic way is to just document your interface and implement it.
This usage of common interfaces is pervasive in dunder functions to comprise the Python data model.
Example | Python Code:
from IPython.display import HTML
Explanation: Lecture 9
Object Oriented Programming
Monday, October 2nd 2017
End of explanation
def Complex(a, b): # constructor
return (a,b)
def real(c): # method
return c[0]
def imag(c):
return c[1]
def str_complex(c):
return "{0}+{1}i".format(c[0], c[1])
c1 = Complex(1,2) # constructor
print(real(c1), " ", str_complex(c1))
Explanation: Motivation
We would like to find a way to represent complex, structured data in the context of our programming language.
For example, to represent a location, we might want to associate a name, a latitude and a longitude with it.
Thus we would want to create a compound data type which carries this information.
In C, for example, this is a struct:
C
struct location {
float longitude;
float latitude;
}
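For comparison only (an aside, not part of the lecture's build-up), a similar compound type can be sketched in Python with collections.namedtuple:
from collections import namedtuple
Location = namedtuple('Location', ['name', 'latitude', 'longitude'])
# approximate coordinates, just for illustration
berkeley = Location('Berkeley', 37.87, -122.27)
print(berkeley.latitude)  # access fields by name, much like a C struct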
REMEMBER: A language has 3 parts:
expressions and statements: how to structure simple computations
means of combination: how to structure complex computations
means of abstraction: how to build complex units
Review
When we write a function, we give it some sensible name which can then be used by a "client" programmer. We don't care about how this function is implemented. We just want to know its signature (API) and use it.
In a similar way, we want to encapsulate our data: we don't want to know how it is stored and all that. We just want to be able to use it. This is one of the key ideas behind object oriented programming.
To do this, write constructors that make objects. We also write other functions that access or change data on the object. These functions are called the "methods" of the object, and are what the client programmer uses.
First Examples
Objects thru tuples: An object for complex numbers
How might we implement such objects? First, let's think of tuples.
End of explanation
c1[0]
Explanation: But things aren't hidden so I can get through the interface:
End of explanation
c1[0]=2
Explanation: Because I used a tuple, and a tuple is immutable, I can't change this complex number once it's created.
End of explanation
def Complex2(a, b): # constructor
def dispatch(message): # capture a and b at constructor-run time
if message=="real":
return a
elif message=='imag':
return b
elif message=="str":
return "{0}+{1}i".format(a, b)
return dispatch
z=Complex2(1,2)
print(z("real"), " ", z("imag"), " ", z("str"))
Explanation: Objects thru closures
Let's try an implementation that uses a closure to capture the value of arguments.
End of explanation
def Complex3(a, b):
in_a=a
in_b=b
def dispatch(message, value=None):
nonlocal in_a, in_b
if message=='set_real' and value != None:
in_a = value
elif message=='set_imag' and value != None:
in_b = value
elif message=="real":
return in_a
elif message=='imag':
return in_b
elif message=="str":
return "{0}+{1}i".format(in_a, in_b)
return dispatch
c3=Complex3(1,2)
print(c3("real"), " ", c3("imag"), " ", c3("str"))
c3('set_real', 2)
print(c3("real"), " ", c3("imag"), " ", c3("str"))
Explanation: This looks pretty good so far.
The only problem is that we don't have a way to change the real and imaginary parts.
For this, we need to add things called setters.
Objects with Setters
End of explanation
class ComplexClass():
def __init__(self, a, b):
self.real = a
self.imaginary = b
Explanation: Python Classes and instance variables
We constructed an object system above. But Python comes with its own.
Classes allow us to define our own types in the Python type system.
End of explanation
HTML('<iframe width="800" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=class%20ComplexClass%28%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20a,%20b%29%3A%0A%20%20%20%20%20%20%20%20self.real%20%3D%20a%0A%20%20%20%20%20%20%20%20self.imaginary%20%3D%20b%0A%0Ac1%20%3D%20ComplexClass%281,2%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
c1 = ComplexClass(1,2)
print(c1, c1.real)
print(vars(c1), " ",type(c1))
c1.real=5.0
print(c1, " ", c1.real, " ", c1.imaginary)
Explanation: __init__ is a special method run automatically by Python.
It is a constructor.
self is the instance of the object.
It acts like this in C++ but self is explicit.
End of explanation
class Animal():
def __init__(self, name):
self.name = name
def make_sound(self):
raise NotImplementedError
class Dog(Animal):
def make_sound(self):
return "Bark"
class Cat(Animal):
def __init__(self, name):
self.name = "A very interesting cat: {}".format(name)
def make_sound(self):
return "Meow"
Explanation: Inheritance and Polymorphism
Inheritance
Inheritance is the idea that a "Cat" is-a "Animal" and a "Dog" is-a "Animal".
Animals make sounds, but Cats Meow and Dogs Bark.
Inheritance makes sure that methods not defined in a child are found and used from a parent.
Polymorphism
Polymorphism is the idea that an interface is specified, but not necessarily implemented, by a superclass and then the interface is implemented in subclasses (differently).
[Actually Polymorphism is much more complex and interesting than this, and this definition is really an outcome of polymorphism. But we'll come to this later.]
Example: Super- and subclasses
End of explanation
a0 = Animal("David")
print(a0.name)
a0.make_sound()
a1 = Dog("Snoopy")
a2 = Cat("Hello Kitty")
animals = [a1, a2]
for a in animals:
print(a.name)
print(isinstance(a, Animal))
print(a.make_sound())
print('--------')
print(a1.make_sound, " ", Dog.make_sound)
print(a1.make_sound())
print('----')
print(Dog.make_sound(a1))
Dog.make_sound()
Explanation: Animal is the superclass (a.k.a the base class).
Dog and Cat are both subclasses (a.k.a derived classes) of the Animal superclass.
Using the Animal class
End of explanation
HTML('<iframe width="800" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=class%20Animal%28%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.name%20%3D%20name%0A%20%20%20%20%20%20%20%20%0A%20%20%20%20def%20make_sound%28self%29%3A%0A%20%20%20%20%20%20%20%20raise%20NotImplementedError%0A%20%20%20%20%0Aclass%20Dog%28Animal%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20make_sound%28self%29%3A%0A%20%20%20%20%20%20%20%20return%20%22Bark%22%0A%20%20%20%20%0Aclass%20Cat%28Animal%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.name%20%3D%20%22A%20very%20interesting%20cat%3A%20%7B%7D%22.format%28name%29%0A%20%20%20%20%20%20%20%20%0A%20%20%20%20def%20make_sound%28self%29%3A%0A%20%20%20%20%20%20%20%20return%20%22Meow%22%0A%0Aa1%20%3D%20Dog%28%22Snoopy%22%29%0Aa2%20%3D%20Cat%28%22Hello%20Kitty%22%29%0Aanimals%20%3D%20%5Ba1,%20a2%5D%0Afor%20a%20in%20animals%3A%0A%20%20%20%20print%28a.name%29%0A%20%20%20%20print%28isinstance%28a,%20Animal%29%29%0A%20%20%20%20print%28a.make_sound%28%29%29%0A%20%20%20%20print%28\'--------\'%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
Explanation: How does this all work?
End of explanation
class Animal():
def __init__(self, name):
self.name=name
print("Name is", self.name)
class Mouse(Animal):
def __init__(self, name):
self.animaltype="prey"
super().__init__(name)
print("Created %s as %s" % (self.name, self.animaltype))
class Cat(Animal):
pass
a1 = Mouse("Tom")
print(vars(a1))
a2 = Cat("Jerry")
print(vars(a2))
HTML('<iframe width="800" height="500" frameborder="0" src="http://pythontutor.com/iframe-embed.html#code=class%20Animal%28%29%3A%0A%20%20%20%20%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.name%3Dname%0A%20%20%20%20%20%20%20%20print%28%22Name%20is%22,%20self.name%29%0A%20%20%20%20%20%20%20%20%0Aclass%20Mouse%28Animal%29%3A%0A%20%20%20%20def%20__init__%28self,%20name%29%3A%0A%20%20%20%20%20%20%20%20self.animaltype%3D%22prey%22%0A%20%20%20%20%20%20%20%20super%28%29.__init__%28name%29%0A%20%20%20%20%20%20%20%20print%28%22Created%20%25s%20as%20%25s%22%20%25%20%28self.name,%20self.animaltype%29%29%0A%20%20%20%20%0Aclass%20Cat%28Animal%29%3A%0A%20%20%20%20pass%0A%0Aa1%20%3D%20Mouse%28%22Tom%22%29%0Aa2%20%3D%20Cat%28%22Jerry%22%29&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false"> </iframe>')
Explanation: Calling a superclass's initializer
Say we don't want to do all the work of setting the name variable in the subclasses.
We can set this "common" work up in the superclass and use super to call the superclass's initializer from the subclass.
There's another way to think about this:
A subclass method will be called instead of a superclass method if the method is in both the sub- and superclass and we call the subclass (polymorphism!).
If we really want the superclass method, then we can use the super built-in function.
See https://rhettinger.wordpress.com/2011/05/26/super-considered-super/
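A minimal sketch (separate from the Animal/Mouse example) of overriding a method and still reaching the parent's version through super():
class Base():
    def greet(self):
        return "hello from Base"

class Child(Base):
    def greet(self):
        # super() finds Base.greet even though Child overrides it
        return super().greet() + ", extended by Child"

print(Child().greet())  # hello from Base, extended by Child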
End of explanation
# Both implement the "Animal" Protocol, which consists of the one make_sound function
class Dog():
def make_sound(self):
return "Bark"
class Cat():
def make_sound(self):
return "Meow"
a1 = Dog()
a2 = Cat()
animals = [a1, a2]
for a in animals:
print(isinstance(a, Animal), " ", a.make_sound())
Explanation: Interfaces
The above examples show inheritance and polymorphism.
Notice that we didn't actually need to set up the inheritance.
We could have just defined 2 different classes and have them both make_sound.
In Java and C++ this is done more formally through Interfaces and Abstract Base Classes, respectively, plus inheritance.
In Python, this agreement to define make_sound is called duck typing.
"If it walks like a duck and quacks like a duck, it is a duck."
End of explanation
class Animal():
def __init__(self, name):
self.name=name
def __repr__(self):
class_name = type(self).__name__
return "{0!s}({1.name!r})".format(class_name, self)
r = Animal("David")
r
print(r)
repr(r)
Explanation: The Python Data Model
Duck typing is used throughout Python. Indeed it's what enables the "Python Data Model"
All python classes implicitly inherit from the root object class.
The Pythonic way is to just document your interface and implement it.
This usage of common interfaces is pervasive in dunder functions to comprise the Python data model.
Example: Printing with __repr__ and __str__
The way printing works is that Python wants classes to implement __repr__ and __str__ methods.
It will use inheritance to give the built-in objects methods when these are not defined.
Any class can define __repr__ and __str__.
When an instance of such a class is interrogated with the repr or str function, then these underlying methods are called.
We'll see __repr__ here. If you define __repr__ you have made an object sensibly printable.
__repr__
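If a class defines both, print() and str() use __str__ while repr() (and the interpreter's echo) use __repr__ — a quick sketch with a throwaway class name so it doesn't clash with Animal:
class NamedThing():
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return "NamedThing({!r})".format(self.name)
    def __str__(self):
        return "a thing called " + self.name

t = NamedThing("David")
print(repr(t))  # NamedThing('David')
print(t)        # a thing called David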
End of explanation |
14,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise
Step20: Expected Output | Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \
x_2 \
... \
x_n \
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \
\frac{1}{1+e^{-x_2}} \
... \
\frac{1}{1+e^{-x_n}} \
\end{pmatrix}\tag{1} $$
End of explanation
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s * (1 - s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
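A side note, not required for the graded function: NumPy can also infer one dimension for you, so the same unrolling can be written as
v = image.reshape(-1, 1)  # -1 tells NumPy to work out length*height*depth itself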
End of explanation
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[2, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \
2 & 6 & 4 \
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \
\sqrt{56} \
\end{bmatrix}\tag{4} $$and $$ x_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
# GRADED FUNCTION: softmax
def softmax(x):
    """
    Calculates the softmax for each row of the input x.
    Your code should work for a row vector and also for matrices of shape (n, m).

    Argument:
    x -- A numpy matrix of shape (n,m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n,m)
    """
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis=1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp / x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
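Here is a tiny, self-contained illustration of that broadcasting (an aside, not part of the graded exercises):
import numpy as np
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # shape (2, 3)
b = np.array([[10.0],
              [100.0]])           # shape (2, 1)
print(a / b)  # b is stretched across the 3 columns: [[0.1 0.2 0.3], [0.04 0.05 0.06]]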
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \
\vdots & \vdots & \vdots & \ddots & \vdots \
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \
\vdots & \vdots & \vdots & \ddots & \vdots \
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \
softmax\text{(second row of x)} \
... \
softmax\text{(last row of x)} \
\end{pmatrix} $$
End of explanation
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
# GRADED FUNCTION: L1
def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
# GRADED FUNCTION: L2
def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.dot((y - yhat), (y - yhat)))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
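For comparison (an aside, not the graded solution), an equivalent vectorized form without np.dot is
loss = np.sum(np.square(y - yhat))  # same value as np.dot(y - yhat, y - yhat)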
End of explanation |
14,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Let's try to see if Pandas can read the .csv files coming from Weather Underground.
Step2: Now that we have a way of getting weather station data, let's do some time-based binning! But first, we will have to do a few things to make this data compatible with our sensor data. We will
* convert the times into timestamps,
* convert temperatures to degrees Celsius
Step3: From the previous cell, we see that temperature data (and for that matter, all other sensor data) begins from 17 November, 2017. So we only need to get weather station data from that date on.
Step4: Comparing Weather Stations to our Weather Data
Now let's look at the differences between the average temperatures measured by the weather station versus our measurements. We chose a weather station close to Etcheverry Hall, so the measurements should be about the same. If the difference is relatively constant but nonzero, that is fine. That would correspond to calibration errors in our sensors (or the weather stations'!), but they should be correlated, and subtraction by a constant does not change correlations, so we would be confident that our correlations of temperature, pressure and humidity with radiation are meaningful.
First, let's try to use data averaged over half a day (43200 seconds). The best way to look for both correlation and differences together is to plot the linear regression. We will see a straight line in the data if there is high correlation, and the slope of the line will be close to 1 if the data are the same.
Step5: Uggh! There are a few influential points that should not exist. Let's get rid of them too in weather_station_diff_and_corr. | Python Code:
CSV_URL = 'https://www.wunderground.com/weatherstation/WXDailyHistory.asp?\
ID=KCABERKE22&day=24&month=06&year=2018&graphspan=day&format=1'
df = pd.read_csv(CSV_URL, index_col=False)
df
# remove every other row from the data because they contain `<br>` only
dg = df.drop([2*i + 1 for i in range(236)])
dg
def get_clean_df(location_id, date):
    """
    Get weather data from `location_id` on `date`, then
    remove all the `<br>` tags in the file.
    `date` should be a list/tuple of 3 strings in the format
    [MM, DD, YYYY].
    """
url = f'''\
https://www.wunderground.com/weatherstation/WXDailyHistory.asp?\
ID={location_id}&\
day={date[1]}&\
month={date[0]}&\
year={date[2]}&\
graphspan=day&\
format=1'''
# print(f'Getting data from {url}')
data = pd.read_csv(url, index_col=False)
# drop every other row because it contains `<br>`
return data.drop([2*i + 1 for i in range(data.shape[0] // 2)])
# ws_data = get_clean_df('KCABERKE105', ['05', '06', '2018']) # weather station data
Explanation: Let's try to see if Pandas can read the .csv files coming from Weather Underground.
End of explanation
def process_data(data_df):
def deg_f_to_c(deg_f):
return (5 / 9) * (deg_f - 32)
def inhg_to_mbar(inhg):
return 33.863753 * inhg
for idx, time, tempf, dewf, pressure, *_ in data_df.itertuples():
data_df.loc[idx, 'Time'] = datetime.strptime(time, '%Y-%m-%d %H:%M:%S').timestamp()
data_df.loc[idx, 'Temperature'] = deg_f_to_c(tempf)
data_df.loc[idx, 'Dewpoint'] = deg_f_to_c(dewf)
data_df.loc[idx, 'Pressure'] = inhg_to_mbar(pressure)
return data_df.drop(['TemperatureF', 'DewpointF', 'PressureIn', 'WindDirection', 'Conditions', 'Clouds',
# 'SolarRadiationWatts/m^2',
'SoftwareType', 'DateUTC<br>'], axis=1)
process_data(get_clean_df('KCABERKE105', ['05', '06', '2018']))
DATA_DIR = 'binned_data'
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_2400.csv'),
header=0, names=['unix_time', 'temperature'])
for idx, timestamp, temp in temperature_data.itertuples():
temperature_data.loc[idx, 'datetime'] = datetime.fromtimestamp(timestamp)
temperature_data[26230:]
Explanation: Now that we have a way of getting weather station data, let's do some time-based binning! But first, we will have to do a few things to make this data compatible with our sensor data. We will
* convert the times into timestamps,
* convert temperatures to degrees Celsius
End of explanation
start_time = date.fromtimestamp(int(temperature_data.loc[26230, 'unix_time']))
end_time = date.fromtimestamp(int(temperature_data.loc[temperature_data.shape[0] - 1, 'unix_time']))
current_time = start_time
data_df = pd.DataFrame([])
while current_time < end_time:
# store the result of the query in dataframe `data_df`
temporary = process_data(get_clean_df('KCABERKE105', [str(current_time.month),
str(current_time.day),
str(current_time.year)]))
temp_cols = list(temporary.columns.values)
temporary = temporary[[temp_cols[6]] + temp_cols[:6] + temp_cols[7:]]
data_df = pd.concat([data_df, temporary], ignore_index=True)
current_time = current_time + timedelta(days=1)
data_df
# data_df.rename({'Time': 'deviceTime_unix'}, axis=1, inplace=True)
data_df.to_csv('wunderground_data/data_0.csv', na_rep='nan', index=False)
Explanation: From the previous cell, we see that temperature data (and for that matter, all other sensor data) begins from 17 November, 2017. So we only need to get weather station data from that date on.
End of explanation
def weather_station_diff_and_corr(interval):
ws_temp = pd.read_csv(f'binned_data/ws_data_Temperature_{interval}.csv',
header=0, names=['utime', 'temp'], usecols=[1])
ws_pressure = pd.read_csv(f'binned_data/ws_data_Pressure_{interval}.csv',
header=0, names=['utime', 'pressure'], usecols=[1])
ws_humidity = pd.read_csv(f'binned_data/ws_data_Humidity_{interval}.csv',
header=0, names=['utime', 'humid'], usecols=[1])
our_temp = pd.read_csv(f'binned_data/data_temperature_{interval}.csv',
header=0, names=['utime', 'ws_temp'], usecols=[1])
our_pressure = pd.read_csv(f'binned_data/data_pressure_{interval}.csv',
header=0, names=['utime', 'ws_pressure'], usecols=[1])
our_humidity = pd.read_csv(f'binned_data/data_humidity_{interval}.csv',
header=0, names=['utime', 'ws_humid'], usecols=[1])
temps = pd.concat([ws_temp, our_temp], axis=1).dropna(axis=0)
pressures = pd.concat([ws_pressure, our_pressure], axis=1).dropna(axis=0)
humids = pd.concat([ws_humidity, our_humidity], axis=1).dropna(axis=0)
# temperature plot
g1 = sns.jointplot(x='temp', y='ws_temp', data=temps, kind='reg')
plt.xlabel('Our Temperature ($^oC$)', fontdict={'fontsize': 12})
plt.ylabel('Weather Station Temperature ($^oC$)', rotation=90, fontdict={'fontsize': 12})
g1.fig.suptitle('Temperature', fontsize=16, fontweight='semibold',
x=0.4, y=1.03)
# presssure plot
g2 = sns.jointplot(x='pressure', y='ws_pressure', data=pressures, kind='reg')
plt.xlabel('Our Pressure (millibars)', fontdict={'fontsize': 12})
plt.ylabel('Weather Station Pressure (millibars)', rotation=90, fontdict={'fontsize': 12})
g2.fig.suptitle('Pressure', fontsize=16, fontweight='semibold',
x=0.4, y=1.03)
# humidity plot
g3 = sns.jointplot(x='humid', y='ws_humid', data=humids, kind='reg')
plt.xlabel('Our Humidity (percentage)', fontdict={'fontsize': 12})
plt.ylabel('Weather Station Humidity (percentage)', rotation=90, fontdict={'fontsize': 12})
g3.fig.suptitle('Humidity', fontsize=14, fontweight='semibold',
x=0.4, y=1.03)
return temps, pressures, humids
t, p, h = weather_station_diff_and_corr(43200)  # returns (temps, pressures, humids)
Explanation: Comparing Weather Stations to our Weather Data
Now let's look at the differences between the average temperatures measured by the weather station versus our measurements. We chose a weather station close to Etcheverry Hall, so the measurements should be about the same. If the difference is relatively constant but nonzero, that is fine. That would correspond to calibration errors in our sensors (or the weather stations'!), but they should be correlated, and subtraction by a constant does not change correlations, so we would be confident that our correlations of temperature, pressure and humidity with radiation are meaningful.
First, let's try to use data averaged over half a day (43200 seconds). The best way to look for both correlation and differences together is to plot the linear regression. We will see a straight line in the data if there is high correlation, and the slope of the line will be close to 1 if the data are the same.
End of explanation
def remove_influential_pts(df: pd.DataFrame, z_star: float):
if df.shape[1] != 2:
raise ValueError('DataFrame must have shape `Nx2`')
for idx, elem1, elem2 in df.itertuples():
if (abs((elem1 - df.iloc[:, 0].mean()) / df.iloc[:, 0].std()) > z_star or
abs((elem2 - df.iloc[:, 1].mean()) / df.iloc[:, 1].std()) > z_star):
df.loc[idx] = float('nan')
return df.dropna()
def weather_station_diff_and_corr(interval):
ws_temp = pd.read_csv(f'binned_data/ws_data_Temperature_{interval}.csv',
header=0, names=['utime', 'temp'], usecols=[1])
ws_pressure = pd.read_csv(f'binned_data/ws_data_Pressure_{interval}.csv',
header=0, names=['utime', 'pressure'], usecols=[1])
ws_humidity = pd.read_csv(f'binned_data/ws_data_Humidity_{interval}.csv',
header=0, names=['utime', 'humid'], usecols=[1])
our_temp = pd.read_csv(f'binned_data/data_temperature_{interval}.csv',
header=0, names=['utime', 'ws_temp'], usecols=[1])
our_pressure = pd.read_csv(f'binned_data/data_pressure_{interval}.csv',
header=0, names=['utime', 'ws_pressure'], usecols=[1])
our_humidity = pd.read_csv(f'binned_data/data_humidity_{interval}.csv',
header=0, names=['utime', 'ws_humid'], usecols=[1])
temps = pd.concat([ws_temp, our_temp], axis=1).dropna(axis=0)
pressures = pd.concat([ws_pressure, our_pressure], axis=1).dropna(axis=0)
humids = pd.concat([ws_humidity, our_humidity], axis=1).dropna(axis=0)
temps = remove_influential_pts(temps, 2.5)
pressures = remove_influential_pts(pressures, 1.4)
humids = remove_influential_pts(humids, 4.)
# temperature plot
g1 = sns.jointplot(x='temp', y='ws_temp', data=temps, kind='reg')
m1, b1 = linregress(temps['temp'], temps['ws_temp'])[0:2]
plt.xlabel('Our Temperature ($^oC$)', fontdict={'fontsize': 12})
plt.ylabel('Weather Station Temperature ($^oC$)', rotation=90, fontdict={'fontsize': 12})
g1.fig.suptitle(f'Temperature (${m1:.2f}x+{b1:.2f}$)', fontsize=16, fontweight='semibold',
x=0.4, y=1.03)
# presssure plot
g2 = sns.jointplot(x='pressure', y='ws_pressure', data=pressures, kind='reg')
m2, b2 = linregress(pressures['pressure'], pressures['ws_pressure'])[0:2]
plt.xlabel('Our Pressure (millibars)', fontdict={'fontsize': 12})
plt.ylabel('Weather Station Pressure (millibars)', rotation=90, fontdict={'fontsize': 12})
g2.fig.suptitle(f'Pressure (${m2:.2f}x+{b2:.2f}$)', fontsize=16, fontweight='semibold',
x=0.4, y=1.03)
# humidity plot
g3 = sns.jointplot(x='humid', y='ws_humid', data=humids, kind='reg')
m3, b3 = linregress(humids['humid'], humids['ws_humid'])[0:2]
plt.xlabel('Our Humidity (percentage)', fontdict={'fontsize': 12})
plt.ylabel('Weather Station Humidity (percentage)', rotation=90, fontdict={'fontsize': 12})
g3.fig.suptitle(f'Humidity (${m3:.2f}x+{b3:.2f}$)', fontsize=14, fontweight='semibold',
x=0.4, y=1.03)
return temps, pressures, humids
t, p, h = weather_station_diff_and_corr(43200)
t, p, h = weather_station_diff_and_corr(432000)
Explanation: Uggh! There are a few influential points that should not exist. Let's get rid of them too in weather_station_diff_and_corr.
End of explanation |
14,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Context for this discussion (also see #19)
Step1: Inspired by varlens examples, here is how this simple function works
Step2: Let's compare the contexts for the variant and the reference alleles | Python Code:
import pysam
import numpy as np
import pandas as pd
def contexify(samfile, chromosome, location, allele, radius):
# This will be our score board
counts = np.zeros(shape=((radius * 2) + 1, 5)) # 5 slots for each of the bases
d = pd.DataFrame(counts,
index=range(location - radius, location + radius + 1),
columns=['A', 'C', 'G', 'T', 'N'])
# Let pysam pileup the reads covering our location of interest for us
for column in samfile.pileup(chromosome, location, location + 1):
# By default, our region is bigger than we want,
# so skip the other columns for now
if column.pos != location:
continue
# Iterate over reads covering our location
for read in column.pileups:
if not read.is_del and not read.is_refskip: # clean up
pos = read.query_position # relative location
seq = read.alignment.query_sequence # read
# Filter reads that do not support the variant
if seq[pos] is not allele:
continue
# Cursor is compatible with our score board indexing
cursor = location - pos - 1
for base in seq: # Move along the sequence one base at a time
cursor = cursor + 1
# If the region is beyond our window, discard the data
if (cursor < location - radius) or (cursor > location + radius):
continue
# Count++ within our scoreboard
count = d[base][cursor]
d[base][cursor] = count + 1
return d.transpose() # Transpose it to make it look more natural
Explanation: Context for this discussion (also see #19):
```
alex [5:54 PM]
Hey Arman, I talked with Ryan for a while today about allele specific transcript selection. He’s going to try find variant reads with an FM index query. Unclear how to reconcile the variant reads once they’re all gathered.
Anyway, it might be nice to benchmark against a dumb baseline. If you have time, what do you think of putting together something that just takes aligned RNA, finds all the reads containing a variant (probably using varlens) and then just the most common nucleotide for each flanking position?
arman [5:55 PM]
sure, sounds easy enough
[5:55]
so the input files will be a VCF file and a BAM file and we will output some summary stats on each variant if I understand you correctly
[5:56]
but what do you exactly mean by a flanking position? The bases next to the variant?
alex [5:57 PM]
Yeah. If you center the variant nucleotides, and then count up the {A,C,T,G} content at every position -1,-2,-3,&c to the left and +1,+2,+3, &c to the right (and normalize by number of reads containing those positions), can we just take the most common?
[5:58]
The output could be something like:
-5 -4 -3 -2 -1 _ _ +1 +2 +3 +4 +5
A 0.1 0.2
C 0.9
T
G
(edited)
[5:59]
Oh jeez, that’s getting tedious to fill out
arman [5:59 PM]
yeah, sure -- something like a PSM representing a motif
alex [5:59 PM]
But you’d have independent probability distributions over {A,G,T,G} at each position before and after the variant nucleotides
arman [6:00 PM]
happy to do that. I will try to get something going during the weekend but the worst case scenerio, we will have it by the end of Monday.
alex [6:00 PM]
Extending out to something like 75bp in both directions (since 25mer vaccine peptide = 75bp)
[6:00]
The variant nucleotides should have no uncertainty in them since we filter the reads to definitely contain the variant
[6:01]
And the best estimate of the sequence would be just taking the most probable nucleotide at each position
[6:01]
It’s a bit naive since it treats positions independently, but if the entropy is low enough (i.e. not much splicing diversity) then could be OK
[6:02]
And relying on the alignments is, one the one hand making it harder to detect weird splices, but on the other hand could be safer than reference-free fishing for matching sequences.
arman [6:03 PM]
I agree. I think this would be great start and there is always the option to extend what we have from this and add some experimental stuff on top of it.
[6:03]
Cool, sounds like a great plan.
alex [6:05 PM]
Awesome
```
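The last step the chat asks for — per-position frequencies and the most common base, normalized by read counts — is not done inside contexify above. A minimal sketch of that normalization on a toy count table shaped like contexify's output (bases as rows, positions as columns):
import pandas as pd
toy = pd.DataFrame({100: [3, 0, 1, 0, 0], 101: [0, 4, 0, 0, 0]},
                   index=['A', 'C', 'G', 'T', 'N'])
freqs = toy / toy.sum(axis=0)     # fraction of reads supporting each base, per position
consensus = freqs.idxmax(axis=0)  # most common base at each flanking position
print(consensus)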
End of explanation
import urllib
urltemplate = "https://raw.githubusercontent.com/hammerlab/varlens/master/test/data/CELSR1/bams/{}"
url = urllib.URLopener()
url.retrieve(urltemplate.format("bam_5.bam"), "bam_5.bam")
url.retrieve(urltemplate.format("bam_5.bam.bai"), "bam_5.bam.bai")
samfile = pysam.AlignmentFile("bam_5.bam", "rb")
# C -> T variant at this particular locus
chromosome = "chr22"
location = 46930258
radius = 5
Explanation: Inspired by varlens examples, here is how this simple function works:
End of explanation
allele1 = "T"
contexify(samfile, chromosome, location, allele1, radius)
allele2 = "C"
contexify(samfile, chromosome, location, allele2, radius)
Explanation: Let's compare the contexts for the variant and the reference alleles:
End of explanation |
14,138 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
powerindex
A python library to compute power indices
Installation
Step1: Now calculate Banzhaf and Shapley-Shubik power indices
Step2: Function calc() computes all available indices.
Thus, in this simple example both indices give 100% to 0% distribution.
Now let's change the seats distribution to parity and see what happens
Step3: As a result, the power distribution is also at parity.
Now, consider a non-trivial, but still simple, example from Wikipedia
Step4: Interpretation is simple. In a committee where 4 parties hold 40%, 30%, 20% and 10% of the seats, with a required qualified majority of 60%, they have 41.7%, 25%, 25% and 8.3% of the power respectively.
In this example, having 2 or 3 seats leads to the same level of power.
Another example
Step5: Notice that in the previous two examples the Banzhaf and Shapley-Shubik indices coincide. This doesn't hold in general, even in games of 3 voters
Step6: Plot results
There's a possibility to plot the power distribution as a pie chart
Step7: As you can see on the plot, the parties have numbers. In order to put their names on the chart, you need to work with the Party class.
Let's take the European Economic Community (EEC) in the years 1958-1972: its members were Germany (4 votes), France (4 votes), Italy (4 votes), Belgium (2 votes), Netherlands (2 votes) and Luxembourg (1 vote), with a qualified majority of 12 votes | Python Code:
%matplotlib inline
import powerindex as px
game=px.Game(quota=51,weights=[51,49])
Explanation: powerindex
A python library to compute power indices
Installation: pip install powerindex
What it is all about
The aim of the package is to compute different power indices of the so-called weighted voting systems (games). This package was employed to perform calculations at powdist.com
Players have weights and can form coalitions. A coalition that achieves the required threshold wins.
To start with a simple example, consider a system with two parties A and B having 51 and 49 seats respectively, with a simple majority rule (i.e. the threshold is 51 seats). How much power do they have? It may appear that, according to the number of seats, they have 51% and 49% respectively.
However, party A can impose any decision without cooperating with party B.
This leads to the conclusion that any reasonable rule would assign party A 100% of the power (since it wins without cooperation) and party B 0% of the power, not 51% to 49%.
The most popular approaches to measure power are Banzhaf and Shapley-Shubik power indices.
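To make the Banzhaf definition concrete, here is a brute-force sketch (independent of the powerindex library) that counts, for every winning coalition, the players whose departure would make it lose, and then normalizes those swing counts:
from itertools import combinations

def banzhaf(quota, weights):
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:                      # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:  # player i is critical
                        swings[i] += 1
    s = sum(swings)
    return [x / s for x in swings]

print(banzhaf(51, [51, 49]))  # [1.0, 0.0] -- all the power to party A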
How to use it
Let's implement an example from the introduction:
End of explanation
game.calc_banzhaf()
print(game.banzhaf)
game.calc_shapley_shubik()
print(game.shapley_shubik)
Explanation: Now calculate Banzhaf and Shapley-Shubik power indices:
End of explanation
game=px.Game(51,weights=[50,50])
game.calc()
print(game.banzhaf)
print(game.shapley_shubik)
Explanation: Function calc() computes all available indices.
Thus, in this simple example both indices give a 100% to 0% distribution.
Now let's change the seats distribution to parity and see what happens:
End of explanation
game=px.Game(6,[4, 3, 2, 1])
game.calc_banzhaf()
print(game.banzhaf)
Explanation: As a result, the power distribution is also at parity.
Now, consider a non-trivial, but still simple, example from Wikipedia:
End of explanation
game=px.Game(6,[3, 2, 1, 1])
game.calc_banzhaf()
print(game.banzhaf)
Explanation: Interpretation is simple. In a committee where 4 parties hold 40%, 30%, 20% and 10% of the seats, with a required qualified majority of 60%, they have 41.7%, 25%, 25% and 8.3% of the power respectively.
In this example, having 2 or 3 seats leads to the same level of power.
Another example:
End of explanation
game=px.Game(4,[3, 2, 1])
game.calc() # again it calculates all available indices
print("Banzhaf index:")
print(game.banzhaf)
print("Shapley-Shubik index:")
print(game.shapley_shubik)
Explanation: Notice that in the previous two examples the Banzhaf and Shapley-Shubik indices coincide. This doesn't hold in general, even in games of 3 voters:
End of explanation
game=px.Game(4,[3, 2, 1])
game.calc()
game.pie_chart()
Explanation: Plot results
There's a possibility to plot the power distribution as a pie chart:
End of explanation
countries={"Germany":4,"France":4,"Italy":4,"Belgium":2,"Netherlands":2,"Luxembourg":1}
parties=[px.Party(countries[country],country) for country in countries]
game=px.Game(12,parties=parties)
game.calc()
game.pie_chart()
Explanation: As you can see on the plot, the parties have numbers. In order to put their names on the chart, you need to work with the Party class.
Let's take the European Economic Community (EEC) in the years 1958-1972: its members were Germany (4 votes), France (4 votes), Italy (4 votes), Belgium (2 votes), Netherlands (2 votes) and Luxembourg (1 vote), with a qualified majority of 12 votes:
End of explanation |
14,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Two-Level
Step2: We'll just check that the pulse area is what we want.
Step3: Plot Output
Step4: Analysis
The $4 \pi$ sech pulse breaks up into two $2 \pi$ pulses, which travel at a speed according to their width.
Movie | Python Code:
import numpy as np
SECH_FWHM_CONV = 1./2.6339157938
t_width = 1.0*SECH_FWHM_CONV # [τ]
print('t_width', t_width)
mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"rabi_freq_t_args": {
"n_pi": 4.0,
"centre": 0.0,
"width": %f
},
"rabi_freq_t_func": "sech"
}
],
"num_states": 2
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 240,
"z_min": -0.5,
"z_max": 1.5,
"z_steps": 100,
"interaction_strengths": [
10.0
],
"savefile": "mbs-two-sech-4pi"
}
""" % (t_width)
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
Explanation: Two-Level: Sech Pulse 4π — Pulse Breakup
Define the Problem
First we need to define a sech pulse with the area we want. We'll fix the width of the pulse and the area to find the right amplitude.
The full-width at half maximum (FWHM) $t_s$ of the sech pulse is related to the FWHM of a Gaussian by a factor of $1/2.6339157938$. (See §3.2.2 of my PhD thesis).
End of explanation
print('The input pulse area is {0}'.format(
np.trapz(mbs.Omegas_zt[0,0,:].real, mbs.tlist)/np.pi))
Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
Explanation: We'll just check that the pulse area is what we want.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('darkgrid')
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 3.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
fig, ax = plt.subplots(figsize=(16, 5))
ax.plot(mbs.zlist, mbs.fields_area()[0]/np.pi)
ax.set_ylim([0.0, 8.0])
ax.set_xlabel('Distance ($L$)')
ax.set_ylabel('Pulse Area ($\pi$)')
Explanation: Plot Output
End of explanation
# C = 0.1 # speed of light
# Y_MIN = 0.0 # Y-axis min
# Y_MAX = 4.0 # y-axis max
# ZOOM = 2 # level of linear interpolation
# FPS = 60 # frames per second
# ATOMS_ALPHA = 0.2 # Atom indicator transparency
# FNAME = "images/mb-solve-two-sech-4pi"
# FNAME_JSON = FNAME + '.json'
# with open(FNAME_JSON, "w") as f:
# f.write(mb_solve_json)
# !make-mp4-fixed-frame.py -f $FNAME_JSON -c $C --fps $FPS --y-min $Y_MIN --y-max $Y_MAX \
# --zoom $ZOOM --atoms-alpha $ATOMS_ALPHA #--peak-line --c-line
# FNAME_MP4 = FNAME + '.mp4'
# !make-gif-ffmpeg.sh -f $FNAME_MP4 --in-fps $FPS
# from IPython.display import Image
# Image(url=FNAME_MP4 +'.gif', format='gif')
Explanation: Analysis
The $4 \pi$ sech pulse breaks up into two $2 \pi$ pulses, which travel at a speed according to their width.
Movie
End of explanation |
14,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome
Welcome to Pineapple, the next generation scientific notebook.
Run Python code
Step1: You can make plots right in the notebook
Step2: Matrix operations are built-in
Pineapple uses Python 3.5, so you can use @ to do matrix multiplication.
Step3: Typeset pretty formulas
$$ \sum_{i=1}^n i = \frac{n(n+1)}{2} $$
$$ \sum_{i=1}^n i = \frac{n(n+1)}{2} $$
Work with images | Python Code:
2 ** 64
Explanation: Welcome
Welcome to Pineapple, the next generation scientific notebook.
Run Python code
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10)
plt.plot(x, np.sin(x));
Explanation: You can make plots right in the notebook
End of explanation
import numpy as np
A = np.linspace(1, 9, 9).reshape(3, 3)
A @ A
Explanation: Matrix operations are built-in
Pineapple uses Python 3.5, so you can use @ to do matrix multiplication.
End of explanation
import urllib.request
import io
png = io.BytesIO(
urllib.request.urlopen('http://i.imgur.com/IyUsYQ8.png').read())
img = plt.imread(png)
plt.imshow(img);
Explanation: Typeset pretty formulas
$$ \sum_{i=1}^n i = \frac{n(n+1)}{2} $$
$$ \sum_{i=1}^n i = \frac{n(n+1)}{2} $$
Work with images
End of explanation |
14,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Step1: Exercise 1
Step2: A URL for downloading all the data as a CSV file can also be obtained via "View API Link".
It must be modified so that it returns up to 5000 records (set max=5000) in the CSV format (&fmt=csv).
Step3: Load the data in from the specified location, ensuring that the various codes are read as strings. Preview the first few rows of the dataset.
Step4: Limit the columns to make the dataframe easier to work with by selecting just a subset of them.
Step5: Derive two new dataframes that separate out the 'World' partner data and the data for individual partner countries.
Step6: You may wish to store a local copy as a CSV file, for example
Step7: To load the data back in
Step8: If you are on a Windows computer, data files may sometimes be saved using a file encoding (Latin-1). Pandas may not recognise this by default, in which case you will see a UnicodeDecodeError.
In such cases, opening files in read_excel() or read_csv() using the parameter encoding="ISO-8859-1" or encoding = "Latin-1" should fix the problem. For example, edit the previous command to read
Step9: Sorting the data
Having loaded in the data, find the most valuable partners in terms of import trade flow during a particular month by sorting the data by decreasing trade value and then selecting the top few rows.
Step10: Task
To complete these tasks you could copy this notebook and amend the code or create a new notebook to do the analysis for your chosen data.
Using the Comtrade Data website, identify a dataset that describes the import and export trade flows for a particular service or form of goods between your country (as reporter) and all ('All') the other countries in the world. Get the monthly data for all months in 2014.
Download the data as a CSV file and add the file to the same folder as the one containing this notebook. Load the data in from the file into a pandas dataframe. Create an easier to work with dataframe that excludes data associated with the 'World' partner. Sort this data to see which countries are the biggest partners in terms of import and export trade flow.
Step11: Now go back to the 'Practice getting data' step in FutureLearn to discuss and mark it complete.
Exercise 2
Step12: Inspect the first few rows associated with a particular group
Step13: As well as grouping on a single term, you can create groups based on multiple columns by passing in several column names as a list. For example, generate groups based on commodity code and trade flow, and then preview the keys used to define the groups.
Step14: Retrieve a group based on multiple group levels by passing in a tuple that specifies a value for each index column. For example, if a grouping is based on the 'Partner' and 'Trade Flow' columns, the argument of get_group has to be a partner/flow pair, like ('France', 'Import') to get all rows associated with imports from France.
Step15: To find the leading partner for a particular commodity, group by commodity, get the desired group, and then sort the result.
Step16: Task
Using your own data set from Exercise 1, try to group the data in a variety of ways, finding the most significant trade partner in each case
Step17: Now go back to the 'Splitting a dataset by grouping' step in FutureLearn to discuss and mark it complete.
Exercise 3
Step18: So that's 222 million dollars or so on the 0401 commodity, and 341 million dollars or so on 0402.
If you total (sum) up all the individual country contributions, you should get similar amounts.
Step19: Not far off – there are perhaps a few rounding errors that would account for the odd couple of million that appear to be missing...
Finding top ranked elements within a group
To find the leading import partners across all the milk products, group by partner, sum (total) the trade value within each group, and then sort the result in descending order before displaying the top few entries.
Step20: Generating simple charts
One of the useful features of the aggregate() method is that it returns an object that can be plotted from directly, in this example a horizontal bar chart.
Step21: Generating alternative groupings
Reports can also be generated to show the total imports per month for each commodity
Step22: The groupby() method splits the data into separate distinct groups of rows, and then the aggregate() method takes each group of rows from the results of the groupby() operation, applies the specified aggregation function, and then combines the results in the output.
The aggregation function itself is applied to all columns of an appropriate type. In the example, the only numeric column that makes sense to aggregate over is the trade value column.
As well as built in summary operations, such as finding the total (sum), or maximum or minimum value in a group (max, min), aggregating functions imported from other Python packages can also be used. As shown in the next example, the numpy package has a function mean that will calculate the mean (simple average) value for a set of values.
Generating several aggregation values at the same time
To generate several aggregate reports in a single line of code, provide a list of several aggregating operations to the aggregate() method
Step23: By combining different grouping combinations and aggregate functions, you can quickly ask a range of questions over the data or generate a wide variety of charts from it.
Sometimes, however, it can be quite hard to see any 'outstanding' values in a complex pivot table. In such cases, a chart may help you see which values are significantly larger or smaller than the other values.
For example, plot the maximum value by month across each code/period combination to see which month saw the maximum peak flow of imports from a single partner.
Step24: For the 0401 commodity, the largest single monthly trade flow in 2014 appears to have taken place in September (201409). For the 0402 commodity, the weakest month was December, 2014.
To chart the mean trade flows by month, simply aggregate on the mean rather than the max.
In some cases, you might want to sort the order of the bars in a bar chart by value. In current versions of pandas this is done with sort_values(), which returns a sorted copy of the series or dataframe by default (the older sort() and order() methods have been removed), so the sorted result can be passed straight on to the plot function.
The following chart displays the total imports for the combined commodities by partner (including the World partner) for the top five partners
Step25: Tasks
For the 0402 trade item, which months saw the greatest average (mean) activity? How does that compare with the maximum flows in each month? How does it compare with the total flow in each month?
Download your own choice of monthly dataset over one or two years containing both import and export data. (To start with, you may find it convenient to split the data into two dataframes, one for exports and one for imports.)
Using your own data
Step26: Now go back to the 'Summary operations' step in FutureLearn to discuss and mark it complete.
Exercise 4
Step27: One reason for filtering a dataset might be to exclude 'sparse' or infrequently occurring items, such as trade partners who only seem to trade for less than six months of the year.
To select just the groups that contain more than a certain number of rows, define a function to test the length (that is, the number of rows) of each group and return a True or False value depending on the test.
In the following case, group by trade flow and only return rows from groups containing three or more rows.
Step28: You can also select groups based on other group properties. For example, you might select just the groups where the total value for a particular column within a group exceeds a certain threshold.
In the following case, select just those commodities where the sum of import and export values is greater than a certain amount to indicate which ones have a large value of trade, in whatever direction, associated with them. First group by the commodity, then filter on the group property of interest.
Step29: Filtering on the Comtrade data
Now try filtering the Comtrade data relating to the milk imports. Start by creating a subset of the data containing only rows where the total trade value of imports for a particular commodity and partner is greater than $25 million (that is, 25000000).
Step30: Check the filtering by grouping on the commodity and partner and summing the result.
Step31: As before, you can plot the results.
Step32: Logical tests can be combined in a filter function, for example testing for partners that only appear to trade infrequently or for small total amounts in any particular commodity.
Step33: In this report, many of the listed countries appear to have traded in only one or two months; but while Hungary traded concentrated/sweetened products eight times, the total trade value was not very significant at all.
Tasks
Filter the dataset so that it only contains rows where the total exports across all the milk products for a particular country are at least two million dollars in any given monthly period. (HINT
Step34: Task#2
Generate a chart from that dataset that displays the sum total trade value for each partner. (HINT
Step35: Task#3
Using your own monthly data for a single year, which countries only trade in your selected trade item rarely or for small amounts? Which partners trade on a regular basis (for example, in at least nine of the months)?
Step36: Task#4
Can you also find countries that trade regularly but only for small amounts (for example whose maximum monthly trade value is less than a certain threshold amount) or who trade infrequently but for large amounts (or other combinations thereof)?
Step37: Now go back to the 'Filtering groups' step in FutureLearn to discuss and mark it complete.
Exercise 5
Step38: Task
Try to come up with some of your own questions and then see if you can use the pivot table to answer them.
For example, see if you can use the table to find
Step39: Getting started with pivot tables in pandas
The pandas library provides a pivot_table() function into which you can pass the elements needed to define the pivot table view you would like to generate over a particular dataset.
If you inspect the documentation for the pandas pivot_table() function, you will see that it is quite involved (but DON'T PANIC!).
Step40: You can start to use the pivot table quite straightforwardly, drawing inspiration from the way you configured the interactive pivot table. The function itself takes the form
Step41: If you just want to use a single data column from the original dataset to specify the row (that is, the index) groupings or the column groupings, you don't need to use a list, just pass in the name of the appropriate original data column.
So, to look at rows grouped by year, country and commodity, and split columns out by trade flow
Step42: One of the features of the interactive pivot table you did not explore was its ability to generate bar chart style views over the pivoted data as well as tabulated results. (In fact, this requires a plugin to the pivot table that has not been installed.)
In the same way that you produced charts from pandas dataframes previously, you can visualise the contents of the dataframe produced from the pivot table operation. | Python Code:
import sys
sys.version
import warnings
warnings.simplefilter('ignore', FutureWarning)
import matplotlib
matplotlib.rcParams['axes.grid'] = True # show gridlines by default
%matplotlib inline
from pandas import *
show_versions()
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Exercise-notebook-4:-Grouping-your-data" data-toc-modified-id="Exercise-notebook-4:-Grouping-your-data-1"><span class="toc-item-num">1 </span>Exercise notebook 4: Grouping your data</a></div><div class="lev1 toc-item"><a href="#Env" data-toc-modified-id="Env-2"><span class="toc-item-num">2 </span>Env</a></div><div class="lev2 toc-item"><a href="#Exercise-1:-Getting-Comtrade-data-into-your-notebook" data-toc-modified-id="Exercise-1:-Getting-Comtrade-data-into-your-notebook-21"><span class="toc-item-num">2.1 </span>Exercise 1: Getting Comtrade data into your notebook</a></div><div class="lev3 toc-item"><a href="#Subsetting-Your-Data" data-toc-modified-id="Subsetting-Your-Data-211"><span class="toc-item-num">2.1.1 </span>Subsetting Your Data</a></div><div class="lev3 toc-item"><a href="#Sorting-the-data" data-toc-modified-id="Sorting-the-data-212"><span class="toc-item-num">2.1.2 </span>Sorting the data</a></div><div class="lev3 toc-item"><a href="#Task" data-toc-modified-id="Task-213"><span class="toc-item-num">2.1.3 </span>Task</a></div><div class="lev2 toc-item"><a href="#Exercise-2:-Grouping-data" data-toc-modified-id="Exercise-2:-Grouping-data-22"><span class="toc-item-num">2.2 </span>Exercise 2: Grouping data</a></div><div class="lev3 toc-item"><a href="#Grouping-the-data" data-toc-modified-id="Grouping-the-data-221"><span class="toc-item-num">2.2.1 </span>Grouping the data</a></div><div class="lev3 toc-item"><a href="#Task" data-toc-modified-id="Task-222"><span class="toc-item-num">2.2.2 </span>Task</a></div><div class="lev2 toc-item"><a href="#Exercise-3:-Experimenting-with-Split-Apply-Combine-–-Summary-reports" data-toc-modified-id="Exercise-3:-Experimenting-with-Split-Apply-Combine-–-Summary-reports-23"><span class="toc-item-num">2.3 </span>Exercise 3: Experimenting with Split-Apply-Combine – Summary reports</a></div><div class="lev3 toc-item"><a href="#Aggregation-operations-–-Generating-Summary-reports" data-toc-modified-id="Aggregation-operations-–-Generating-Summary-reports-231"><span class="toc-item-num">2.3.1 </span>Aggregation operations – Generating <em>Summary</em> reports</a></div><div class="lev3 toc-item"><a href="#Finding-top-ranked-elements-within-a-group" data-toc-modified-id="Finding-top-ranked-elements-within-a-group-232"><span class="toc-item-num">2.3.2 </span>Finding top ranked elements within a group</a></div><div class="lev3 toc-item"><a href="#Generating-simple-charts" data-toc-modified-id="Generating-simple-charts-233"><span class="toc-item-num">2.3.3 </span>Generating simple charts</a></div><div class="lev3 toc-item"><a href="#Generating-alternative-groupings" data-toc-modified-id="Generating-alternative-groupings-234"><span class="toc-item-num">2.3.4 </span>Generating alternative groupings</a></div><div class="lev3 toc-item"><a href="#Generating-several-aggregation-values-at-the-same-time" data-toc-modified-id="Generating-several-aggregation-values-at-the-same-time-235"><span class="toc-item-num">2.3.5 </span>Generating several aggregation values at the same time</a></div><div class="lev3 toc-item"><a href="#Tasks" data-toc-modified-id="Tasks-236"><span class="toc-item-num">2.3.6 </span>Tasks</a></div><div class="lev2 toc-item"><a href="#Exercise-4:-Filtering-groups" data-toc-modified-id="Exercise-4:-Filtering-groups-24"><span class="toc-item-num">2.4 </span>Exercise 4: Filtering groups</a></div><div class="lev3 toc-item"><a href="#Filtering-on-the-Comtrade-data" 
data-toc-modified-id="Filtering-on-the-Comtrade-data-241"><span class="toc-item-num">2.4.1 </span>Filtering on the Comtrade data</a></div><div class="lev3 toc-item"><a href="#Tasks" data-toc-modified-id="Tasks-242"><span class="toc-item-num">2.4.2 </span>Tasks</a></div><div class="lev4 toc-item"><a href="#Task#1" data-toc-modified-id="Task#1-2421"><span class="toc-item-num">2.4.2.1 </span>Task#1</a></div><div class="lev4 toc-item"><a href="#Task#2" data-toc-modified-id="Task#2-2422"><span class="toc-item-num">2.4.2.2 </span>Task#2</a></div><div class="lev4 toc-item"><a href="#Task#3" data-toc-modified-id="Task#3-2423"><span class="toc-item-num">2.4.2.3 </span>Task#3</a></div><div class="lev4 toc-item"><a href="#Task#4" data-toc-modified-id="Task#4-2424"><span class="toc-item-num">2.4.2.4 </span>Task#4</a></div><div class="lev2 toc-item"><a href="#Exercise-5:-Interactive-pivot-table" data-toc-modified-id="Exercise-5:-Interactive-pivot-table-25"><span class="toc-item-num">2.5 </span>Exercise 5: Interactive pivot table</a></div><div class="lev2 toc-item"><a href="#Task" data-toc-modified-id="Task-26"><span class="toc-item-num">2.6 </span>Task</a></div><div class="lev2 toc-item"><a href="#Exercise-6:-Pivot-tables-with-pandas" data-toc-modified-id="Exercise-6:-Pivot-tables-with-pandas-27"><span class="toc-item-num">2.7 </span>Exercise 6: Pivot tables with pandas</a></div><div class="lev3 toc-item"><a href="#Getting-started-with-pivot-tables-in-pandas" data-toc-modified-id="Getting-started-with-pivot-tables-in-pandas-271"><span class="toc-item-num">2.7.1 </span>Getting started with pivot tables in pandas</a></div><div class="lev3 toc-item"><a href="#Task" data-toc-modified-id="Task-272"><span class="toc-item-num">2.7.2 </span>Task</a></div>
# Exercise notebook 4: Grouping your data
This Jupyter notebook, for Week 4 of The Open University's [_Learn to code for Data Analysis_](http://futurelearn.com/courses/learn-to-code) course, contains code examples and coding activities for you.
In Week 4, you'll come across steps directing you to this notebook. Once you've done the exercise, go back to FutureLearn to discuss it with your fellow learners and course facilitators and mark it as complete. Remember to run the code in this notebook before you start.
# Env
End of explanation
LOCATION='comtrade_milk_uk_monthly_14.csv'
Explanation: Exercise 1: Getting Comtrade data into your notebook
In this exercise, you will practice loading data from Comtrade into a pandas dataframe and getting it into a form where you can start to work with it.
The following steps and code are an example. Your task for this exercise is stated at the end, after the example.
The data is obtained from the United Nations Comtrade website, by selecting the following configuration:
Type of Product: goods
Frequency: monthly
Periods: all of 2014
Reporter: United Kingdom
Partners: all
Flows: imports and exports
HS (as reported) commodity codes: 0401 (Milk and cream, neither concentrated nor sweetened) and 0402 (Milk and cream, concentrated or sweetened)
Clicking on 'Preview' results in a message that the data exceeds 500 rows. Data was downloaded using the Download CSV button and the download file renamed appropriately.
End of explanation
# LOCATION = 'http://comtrade.un.org/api/get?max=5000&type=C&freq=M&px=HS&ps=2014&r=826&p=all&rg=1%2C2&cc=0401%2C0402&fmt=csv'
Explanation: A URL for downloading all the data as a CSV file can also be obtained via "View API Link".
It must be modified so that it returns up to 5000 records (set max=5000) in the CSV format (&fmt=csv).
End of explanation
milk = read_csv(LOCATION, dtype={'Commodity Code':str, 'Reporter Code':str})
milk.head(3)
Explanation: Load the data in from the specified location, ensuring that the various codes are read as strings. Preview the first few rows of the dataset.
End of explanation
COLUMNS = ['Year', 'Period','Trade Flow','Reporter', 'Partner', 'Commodity','Commodity Code','Trade Value (US$)']
milk = milk[COLUMNS]
Explanation: Limit the columns to make the dataframe easier to work with by selecting just a subset of them.
End of explanation
milk_world = milk[milk['Partner'] == 'World']
milk_countries = milk[milk['Partner'] != 'World']
Explanation: Derive two new dataframes that separate out the 'World' partner data and the data for individual partner countries.
End of explanation
milk_world.to_csv('worldmilk.csv', index=False)
milk_countries.to_csv('countrymilk.csv', index=False)
Explanation: You may wish to store a local copy as a CSV file, for example:
End of explanation
world_test = read_csv('worldmilk.csv', dtype={'Commodity Code':str, 'Reporter Code':str}, encoding='latin-1')
world_test.head(2)
load_test = read_csv('countrymilk.csv', dtype={'Commodity Code':str, 'Reporter Code':str}, encoding='latin-1')
load_test.head(2)
Explanation: To load the data back in:
End of explanation
milk_imports = milk[milk['Trade Flow'] == 'Imports']
milk_countries_imports = milk_countries[milk_countries['Trade Flow'] == 'Imports']
milk_world_imports = milk_world[milk_world['Trade Flow'] == 'Imports']
len(milk_imports) == len(milk_countries_imports) + len(milk_world_imports)
Explanation: If you are on a Windows computer, data files may sometimes be saved using a file encoding (Latin-1). Pandas may not recognise this by default, in which case you will see a UnicodeDecodeError.
In such cases, opening files in read_excel() or read_csv() using the parameter encoding="ISO-8859-1" or encoding = "Latin-1" should fix the problem. For example, edit the previous command to read:
load_test=read_csv('countrymilk.csv', dtype={'Commodity Code':str}, encoding = "ISO-8859-1")
Subsetting Your Data
For large or heterogeneous datasets, it is often convenient to create subsets of the data. To further separate out the imports:
End of explanation
milkImportsInJanuary2014 = milk_countries_imports[milk_countries_imports['Period'] == 201401]
milkImportsInJanuary2014.sort_values('Trade Value (US$)',ascending=False).head(10)
Explanation: Sorting the data
Having loaded in the data, find the most valuable partners in terms of import trade flow during a particular month by sorting the data by decreasing trade value and then selecting the top few rows.
End of explanation
ind = read_csv("comtrade_milk_in_monthly_16.csv", dtype={'Commodity Code':str, 'Reporter Code':str})
ind.head()
len(ind)
ind['Commodity'].unique()
ind.columns
COLUMNS = ['Year', 'Period','Trade Flow','Reporter', 'Partner', 'Commodity','Commodity Code','Trade Value (US$)']
ind = ind[COLUMNS]
ind.head()
ind_world = ind[ind['Partner'] == 'World']
ind_countries = ind[ind['Partner'] != 'World']
ind_exports = ind[ind['Trade Flow'] == 'Exports']
ind_countries_exports = ind_countries[ind_countries['Trade Flow'] == 'Exports']
ind_world_exports = ind_world[ind_world['Trade Flow'] == 'Exports']
ind_countries_exports.sort_values('Trade Value (US$)',ascending=False).head(10)
Explanation: Task
To complete these tasks you could copy this notebook and amend the code or create a new notebook to do the analysis for your chosen data.
Using the Comtrade Data website, identify a dataset that describes the import and export trade flows for a particular service or form of goods between your country (as reporter) and all ('All') the other countries in the world. Get the monthly data for all months in 2014.
Download the data as a CSV file and add the file to the same folder as the one containing this notebook. Load the data in from the file into a pandas dataframe. Create an easier to work with dataframe that excludes data associated with the 'World' partner. Sort this data to see which countries are the biggest partners in terms of import and export trade flow.
End of explanation
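The export side of this task is handled by the code above; a minimal sketch of the matching import-side sort, assuming the same ind_countries dataframe and that the downloaded file also contains import rows:
ind_countries_imports = ind_countries[ind_countries['Trade Flow'] == 'Imports']
# largest import partners first
ind_countries_imports.sort_values('Trade Value (US$)', ascending=False).head(10)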
groups = milk_countries.groupby('Trade Flow')
groups.groups.keys()
groups.head()
Explanation: Now go back to the 'Practice getting data' step in FutureLearn to discuss and mark it complete.
Exercise 2: Grouping data
On many occasions, a dataframe may be organised as groups of rows where the group membership is identified based on cell values within one or more 'key' columns. Grouping refers to the process whereby rows associated with a particular group are collated so that you can work with just those rows as distinct subsets of the whole dataset.
The number of groups the dataframe will be split into is based on the number of unique values identified within a single key column, or the number of unique combinations of values for two or more key columns.
The groupby() method runs down each row in a data frame, splitting the rows into separate groups based on the unique values associated with the key column or columns.
The following is an example of the steps and code needed to split the dataframe from the Exercise 1 example.
Grouping the data
Split the data into two different subsets of data (imports and exports), by grouping on trade flow.
End of explanation
groups.get_group('Imports').head()
Explanation: Inspect the first few rows associated with a particular group:
End of explanation
GROUPING_COMMFLOW = ['Commodity Code','Trade Flow']
groups = milk_countries.groupby(GROUPING_COMMFLOW)
groups.groups.keys()
Explanation: As well as grouping on a single term, you can create groups based on multiple columns by passing in several column names as a list. For example, generate groups based on commodity code and trade flow, and then preview the keys used to define the groups.
End of explanation
GROUPING_PARTNERFLOW = ['Partner','Trade Flow']
groups = milk_countries.groupby(GROUPING_PARTNERFLOW)
GROUP_PARTNERFLOW= ('France','Imports')
groups.get_group( GROUP_PARTNERFLOW ).head()
Explanation: Retrieve a group based on multiple group levels by passing in a tuple that specifies a value for each index column. For example, if a grouping is based on the 'Partner' and 'Trade Flow' columns, the argument of get_group has to be a partner/flow pair, like ('France', 'Imports'), to get all rows associated with imports from France.
End of explanation
groups = milk_countries.groupby(['Commodity Code'])
groups.get_group('0402').sort_values("Trade Value (US$)", ascending=False).head()
Explanation: To find the leading partner for a particular commodity, group by commodity, get the desired group, and then sort the result.
End of explanation
ind.columns
groups = ind.groupby('Commodity Code')
groups.get_group('0401').sort_values('Trade Value (US$)', ascending=False).head()
groups = ind.groupby(['Trade Flow', 'Commodity', 'Year'])
GROUPING_IND = ('Exports',
'Milk and cream; not concentrated nor containing added sugar or other sweetening matter',
2016,)
groups.get_group(GROUPING_IND).sort_values('Trade Value (US$)', ascending=False).head()
GROUPING_IND = ('Imports',
'Milk and cream; not concentrated nor containing added sugar or other sweetening matter',
2016,)
groups.get_group(GROUPING_IND)\
.sort_values('Trade Value (US$)', ascending=False)\
.head()
Explanation: Task
Using your own data set from Exercise 1, try to group the data in a variety of ways, finding the most significant trade partner in each case:
by commodity, or commodity code
by trade flow, commodity and year.
End of explanation
milk_world_imports.groupby('Commodity Code')['Trade Value (US$)'].aggregate(sum)
Explanation: Now go back to the 'Splitting a dataset by grouping' step in FutureLearn to discuss and mark it complete.
Exercise 3: Experimenting with Split-Apply-Combine – Summary reports
Having learned how to group data using the groupby() method, you will now start to put those groups to work.
Aggregation operations – Generating Summary reports
Aggregation operations can be invoked using the aggregate() method.
To find the total value of imports traded for each commodity within the period, take the world dataframe, and sum the values over the trade value column within each grouping.
End of explanation
milk_imports_grouped=milk_countries_imports.groupby('Commodity Code')
milk_imports_grouped['Trade Value (US$)'].aggregate(sum)
Explanation: So that's 222 million dollars or so on the 0401 commodity, and 341 million dollars or so on 0402.
If you total (sum) up all the individual country contributions, you should get similar amounts.
End of explanation
milk_countries_imports_totals=milk_countries_imports.groupby('Partner')[['Trade Value (US$)']].aggregate(sum)
milk_countries_imports_totals.sort_values('Trade Value (US$)', ascending=False).head()
Explanation: Not far off – there are perhaps a few rounding errors that would account for the odd couple of million that appear to be missing...
Finding top ranked elements within a group
To find the leading import partners across all the milk products, group by partner, sum (total) the trade value within each group, and then sort the result in descending order before displaying the top few entries.
End of explanation
milk_imports_grouped['Trade Value (US$)'].aggregate(sum).plot(kind='barh');
Explanation: Generating simple charts
One of the useful features of the aggregate() method is that it returns an object that can be plotted from directly, in this example a horizontal bar chart.
End of explanation
monthlies=milk_countries_imports.groupby(['Commodity','Trade Flow','Period'])['Trade Value (US$)'].aggregate(sum)
monthlies
Explanation: Generating alternative groupings
Reports can also be generated to show the total imports per month for each commodity: group on commodity, trade flow and period, and then sum the trade values contained within each group.
End of explanation
from numpy import mean
GROUPING_COMMFLOWPERIOD=['Commodity','Trade Flow','Period']
milk_countries.groupby(GROUPING_COMMFLOWPERIOD)['Trade Value (US$)'].aggregate([sum, min, max, mean])
Explanation: The groupby() method splits the data into separate distinct groups of rows, and then the aggregate() method takes each group of rows from the results of the groupby() operation, applies the specified aggregation function, and then combines the results in the output.
The aggregation function itself is applied to all columns of an appropriate type. In the example, the only numeric column that makes sense to aggregate over is the trade value column.
As well as built in summary operations, such as finding the total (sum), or maximum or minimum value in a group (max, min), aggregating functions imported from other Python packages can also be used. As shown in the next example, the numpy package has a function mean that will calculate the mean (simple average) value for a set of values.
Generating several aggregation values at the same time
To generate several aggregate reports in a single line of code, provide a list of several aggregating operations to the aggregate() method:
End of explanation
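To make the split-apply-combine idea concrete, here is a throwaway illustration on a tiny frame (the values are invented purely for demonstration):
toy = DataFrame({'key': ['a', 'a', 'b'], 'value': [1, 2, 10]})
# split the rows on 'key', sum each group's 'value', and combine the results into one series
toy.groupby('key')['value'].aggregate(sum)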
milk_countries_imports.groupby(['Commodity Code','Period'])['Trade Value (US$)'].aggregate(max).plot(kind='barh');
Explanation: By combining different grouping combinations and aggregate functions, you can quickly ask a range of questions over the data or generate a wide variety of charts from it.
Sometimes, however, it can be quite hard to see any 'outstanding' values in a complex pivot table. In such cases, a chart may help you see which values are significantly larger or smaller than the other values.
For example, plot the maximum value by month across each code/period combination to see which month saw the maximum peak flow of imports from a single partner.
End of explanation
milk_bypartner_total=milk[milk["Trade Flow"]=='Imports'].groupby(['Partner'])['Trade Value (US$)'].aggregate(sum)
# Top five partners (including the World total) by import value, largest first
milk_bypartner_total.sort_values(ascending=False).head(5).plot(kind='barh');
Explanation: For the 0401 commodity, the largest single monthly trade flow in 2014 appears to have taken place in September (201409). For the 0402 commodity, the weakest month was December, 2014.
To chart the mean trade flows by month, simply aggregate on the mean rather than the max.
In some cases, you might want to sort the order of the bars in a bar chart by value. In current versions of pandas this is done with sort_values(), which returns a sorted copy of the series or dataframe by default (the older sort() and order() methods have been removed), so the sorted result can be passed straight on to the plot function.
The following chart displays the total imports for the combined commodities by partner (including the World partner) for the top five partners: sort_values(ascending=False) sorts the values in descending order and passes them to head(), which selects the top five and passes those on to the plotting function.
End of explanation
ind_exports = ind[ind['Trade Flow'] == 'Exports'].groupby(['Period'])['Trade Value (US$)'].aggregate(sum)
ind_exports.sort_values().head(15).plot(kind='barh');
ind_imports = ind[ind['Trade Flow'] == 'Imports'].groupby(['Period'])['Trade Value (US$)'].aggregate(sum)
ind_imports.sort_values().head(15).plot(kind='barh');
ind_exports_partner = ind[ind['Trade Flow'] == 'Exports'].groupby(['Partner'])['Trade Value (US$)'].aggregate(sum)
ind_exports_partner.sort_values().head(10).plot(kind='barh');
Explanation: Tasks
For the 0402 trade item, which months saw the greatest average (mean) activity? How does that compare with the maximum flows in each month? How does it compare with the total flow in each month?
Download your own choice of monthly dataset over one or two years containing both import and export data. (To start with, you may find it convenient to split the data into two dataframes, one for exports and one for imports.)
Using your own data:
find out which months saw the largest total value of imports, or exports?
assess, by eye, if there appears to be any seasonal trend in the behaviour of imports or exports?
plot a bar chart showing the top three importers or exporters of your selected trade item over the period you grabbed the data for, compared to the total world trade value.
End of explanation
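A sketch for the first task above, assuming the milk_countries_imports dataframe and the numpy mean import from the earlier cells: compare the mean, maximum and total monthly import value for the 0402 commodity.
flows_0402 = milk_countries_imports[milk_countries_imports['Commodity Code'] == '0402']
flows_0402.groupby('Period')['Trade Value (US$)'].aggregate([mean, max, sum])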
df = DataFrame({'Commodity' : ['Fish', 'Milk', 'Eggs', 'Fish', 'Milk'],
'Trade Flow' : ['Import', 'Import', 'Import', 'Export','Export'],
'Value' : [1,2,4,8,16]})
df
Explanation: Now go back to the 'Summary operations' step in FutureLearn to discuss and mark it complete.
Exercise 4: Filtering groups
If you have a large dataset that can be split into multiple groups but for which you only want to report on groups that have a particular property, the filter() method can be used to apply a test to a group and only return rows from groups that pass a particular group-wide test. If the test evaluates as False, the rows included in that group will be ignored.
Consider the following simple test dataset:
End of explanation
def groupsOfThreeOrMoreRows(g):
return len(g) >= 3
df.groupby('Trade Flow').filter(groupsOfThreeOrMoreRows)
Explanation: One reason for filtering a dataset might be to exclude 'sparse' or infrequently occurring items, such as trade partners who only seem to trade for less than six months of the year.
To select just the groups that contain more than a certain number of rows, define a function to test the length (that is, the number of rows) of each group and return a True or False value depending on the test.
In the following case, group by trade flow and only return rows from groups containing three or more rows.
End of explanation
def groupsWithValueGreaterThanFive(g):
return g['Value'].sum() > 5
df.groupby('Commodity').filter(groupsWithValueGreaterThanFive)
Explanation: You can also select groups based on other group properties. For example, you might select just the groups where the total value for a particular column within a group exceeds a certain threshold.
In the following case, select just those commodities where the sum of import and export values is greater than a certain amount to indicate which ones have a large value of trade, in whatever direction, associated with them. First group by the commodity, then filter on the group property of interest.
End of explanation
def groupsWithImportsOver25million(g):
return g['Trade Value (US$)'].sum() > 25000000
rows=milk_countries_imports.groupby(['Commodity','Partner']).filter(groupsWithImportsOver25million)
Explanation: Filtering on the Comtrade data
Now try filtering the Comtrade data relating to the milk imports. Start by creating a subset of the data containing only rows where the total trade value of imports for a particular commodity and partner is greater than $25 million (that is, 25000000).
End of explanation
rows.groupby(['Commodity','Partner'])['Trade Value (US$)'].aggregate(sum)
Explanation: Check the filtering by grouping on the commodity and partner and summing the result.
End of explanation
# rows.groupby(['Commodity','Partner'])['Trade Value (US$)'].aggregate(sum).sort_values('Trade Value (US$)',inplace=False,ascending=False).plot(kind='barh');
rows.groupby(['Commodity','Partner'])['Trade Value (US$)']\
.aggregate(sum)\
.sort_values(inplace=False,ascending=False)\
.plot(kind='barh');
Explanation: As before, you can plot the results.
End of explanation
def weakpartner(g):
    # groups that traded infrequently OR for a small total amount
    return len(g) <= 3 or g['Trade Value (US$)'].sum() < 25000
weak_milk_countries_imports=milk_countries_imports.groupby(['Commodity','Partner']).filter(weakpartner)
weak_milk_countries_imports.groupby(['Commodity','Partner'])[['Trade Value (US$)']].aggregate([len,sum])
Explanation: Logical tests can be combined in a filter function, for example testing for partners that only appear to trade infrequently or for small total amounts in any particular commodity.
End of explanation
def twomillionOrMore(g):
    # keep partner/period groups whose total export value is at least two million dollars
    return g['Trade Value (US$)'].sum() >= 2000000
milk_countries_exports= milk_countries[milk_countries['Trade Flow'] == 'Exports']
milk_countries_exports_partner_period=milk_countries_exports.groupby(['Partner', 'Period']).filter(twomillionOrMore)
milk_countries_exports_partner_period.head()
Explanation: In this report, many of the listed countries appear to have traded in only one or two months; but while Hungary traded concentrated/sweetened products eight times, the total trade value was not very significant at all.
Tasks
Filter the dataset so that it only contains rows where the total exports across all the milk products for a particular country are at least two million dollars in any given monthly period. (HINT: group on partner and period and filter against a function that tests the minimum trade value exceeds the required value.)
Generate a chart from that dataset that displays the sum total trade value for each partner. (HINT: group on the partner and then aggregate on the sum.)
Using your own monthly data for a single year, which countries only trade in your selected trade item rarely or for small amounts? Which partners trade on a regular basis (for example, in at least nine of the months)?
Can you also find countries that trade regularly but only for small amounts (for example whose maximum monthly trade value is less than a certain threshold amount) or who trade infrequently but for large amounts (or other combinations thereof)?
Task#1
Filter the dataset so that it only contains rows where the total exports across all the milk products for a particular country are at least two million dollars in any given monthly period. (HINT: group on partner and period and filter against a function that tests the minimum trade value exceeds the required value.)
End of explanation
milk_countries_exports_partner_period\
.groupby(['Partner'])[['Trade Value (US$)']]\
.aggregate([sum])\
.plot(kind='barh');
Explanation: Task#2
Generate a chart from that dataset that displays the sum total trade value for each partner. (HINT: group on the partner and then aggregate on the sum.)
End of explanation
def weakpartner(g):
    # groups that traded infrequently OR for a small total amount
    return len(g) <= 3 or g['Trade Value (US$)'].sum() < 25000
milk_countries_exports_commodity_partner=milk_countries_exports.groupby(['Commodity','Partner']).filter(weakpartner)
milk_countries_exports_commodity_partner.sort_values('Trade Value (US$)', ascending=False).head()
def regularpartners(g):
return len(g)>=9
# group by partner so that len(g) counts how many monthly rows each partner has
milk_countries_exports_reg_partners=milk_countries_exports.groupby('Partner').filter(regularpartners)
milk_countries_exports_reg_partners.head()
Explanation: Task#3
Using your own monthly data for a single year, which countries only trade in your selected trade item rarely or for small amounts? Which partners trade on a regular basis (for example, in at least nine of the months)?
End of explanation
def regularButLowPartners(g):
    # at least ten monthly rows AND a small mean trade value
    return len(g) >= 10 and g['Trade Value (US$)'].mean() < 25000
milk_countries_exports_reg_low=milk_countries_exports.groupby(['Partner'])\
.filter(regularButLowPartners)
milk_countries_exports_reg_low.head()
def irregularButHighPartners(g):
    # few monthly rows AND a large mean trade value
    return len(g) <= 5 and g['Trade Value (US$)'].mean() > 2500000
milk_countries_exports_irreg_high=milk_countries_exports.groupby(['Partner'])\
.filter(irregularButHighPartners)
milk_countries_exports_irreg_high.head()
Explanation: Task#4
Can you also find countries that trade regularly but only for small amounts (for example whose maximum monthly trade value is less than a certain threshold amount) or who trade infrequently but for large amounts (or other combinations thereof)?
End of explanation
from IPython.display import HTML,IFrame
IFrame('comtrade_pivot.html',width=1000,height=600)
Explanation: Now go back to the 'Filtering groups' step in FutureLearn to discuss and mark it complete.
Exercise 5: Interactive pivot table
The interactive pivot table contains a fragment of the milk data downloaded from Comtrade relating to the leading partner importers of milk products to the UK. (Note: If you can't see the pivot table, check you have downloaded it to the same folder as this notebook and run the cell below.)
Configure the pivot table by dragging the labels into the appropriate row and column selection areas. (You do not need to add all the labels to those areas). Select the aggregation type using the calculation list (which defauts to count). Click on the down arrow associated with a label in order to select a subset of values associated with that label.
Use the interactive pivot table to generate reports that display:
a single column containing the total value of each trade flow for each commodity each year (in rows: Year, Commodity, Trade Flow; no columns; sum Trade Value(US$))
for each year and each commodity, a column containing the total trade value by Trade flow (rows: year, commodity; cols Trade Flow; sum trade value)
the total exports for each partner country (rows) by year (columns). Row: partner, trade flow with filter set to export); col: year; sum trade value
End of explanation
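As a rough pandas equivalent of the first report listed above (a sketch, assuming the milk_countries dataframe loaded in Exercise 1):
pivot_table(milk_countries,
            index=['Year', 'Commodity', 'Trade Flow'],
            values='Trade Value (US$)',
            aggfunc=sum)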
#Example dataframe
df = DataFrame({"Commodity":["A","A","A","A","B","B","B","C","C"],
"Amount":[10,15,5,20,10,10,5,20,30],
"Reporter":["P","P","Q","Q","P","P","Q","P","Q"],
"Flow":["X","Y","X","Y","X","Y","X","X","Y"]},
columns=["Commodity","Reporter","Flow","Amount"])
df
Explanation: Task
Try to come up with some of your own questions and then see if you can use the pivot table to answer them.
For example, see if you can use the table to find:
the total value by partner country of each commodity type (with each row corresponding to a particular country)
the total value of trade in commodity type for each month of the year
the leading partners associated with the 0402 commodity code
the minimum trade value, by month and commodity type, for Ireland.
Now go back to the 'Interactive pivot table' step in FutureLearn to discuss and mark it complete.
Exercise 6: Pivot tables with pandas
Pivot tables can be quite hard to understand, so if you want a gentle dataset to pratice with, here is the simple example dataset used in the previous step that you can try out a few pivot table functions on.
End of explanation
##Inspect the documentation for the pandas pivot_table() function
##Uncomment the following command (remove the #) and then click the play button in the toolbar to run the cell
#?pivot_table
##The documentation file should pop up from the bottom of the browser.
##Click the x to close it.
Explanation: Getting started with pivot tables in pandas
The pandas library provides a pivot_table() function into which you can pass the elements needed to define the pivot table view you would like to generate over a particular dataset.
If you inspect the documentation for the pandas pivot_table() function, you will see that it is quite involved (but DON'T PANIC!).
End of explanation
KEYPARTNERS = ['Belgium','France','Germany','Ireland','Netherlands','Denmark']
milk_keypartners = milk_countries[milk_countries['Partner'].isin(KEYPARTNERS)]
pivot_table(milk_keypartners,
index=['Year','Partner','Trade Flow','Commodity'],
values='Trade Value (US$)',
aggfunc=sum)
Explanation: You can start to use the pivot table quite straightforwardly, drawing inspiration from the way you configured the interactive pivot table. The function itself takes the form:
pd.pivot_table(DATAFRAME,
index= (LIST_OF_)DATA_COLUMN(S)_THAT_DEFINE_PIVOT_TABLE_ROWS,
columns= (LIST_OF_)DATA_COLUMN(S)_THAT_DEFINE_PIVOT_TABLE_COLUMNS
values= DATA_COLUMN_TO_APPLY_THE SUMMARYFUNCTION_TO,
aggfunc=sum
)
You can generate a pivot table that shows the total trade value as a single column, grouped into row based subdivisions based on year, country, trade flow and commodity in the following way.
The following pivot table reports on a subset of countries. The isin() method selects rows whose partner value 'is in' the list of specified partners.
End of explanation
#For convenience, let's assign the output of this pivot table operation to a variable...
report = pivot_table(milk_keypartners,
index=['Year','Partner','Commodity'],
columns='Trade Flow',
values='Trade Value (US$)',
aggfunc=sum)
#And then display the result, sorted by import value
report.sort_values('Imports', ascending=False)
Explanation: If you just want to use a single data column from the original dataset to specify the row (that is, the index) groupings or the column groupings, you don't need to use a list, just pass in the name of the appropriate original data column.
So, to look at rows grouped by year, country and commodity, and split columns out by trade flow:
End of explanation
report.sort_values('Imports').plot(kind='barh');
Explanation: One of the features of the interactive pivot table you did not explore was its ability to generate bar chart style views over the pivoted data as well as tabulated results. (In fact, this requires a plugin to the pivot table that has not been installed.)
In the same way that you produced charts from pandas dataframes previously, you can visualise the contents of the dataframe produced from the pivot table operation.
End of explanation |
14,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Score mechanism
input
Step1: Get each base breed's score
Step2: Model version 1 | Python Code:
# 52 base classes:
# source 2: classified dog names
import pickle, time, boto3
import numpy as np
import pandas as pd
# `df` (the scraped reviews dataframe) and `snowball` (a stemmer) are assumed to be defined in earlier cells not shown here
breed_classes = pd.read_csv("s3://dogfaces/tensor_model/output_labels_20170907.txt",names=['breed'])
base_breeds = breed_classes['breed'].values
base_breeds
with open('breed_lookup.pickle', 'rb') as handle:
rev_to_breed = pickle.load(handle)
len(rev_to_breed)
with open('breed_dict.pickle', 'rb') as handle:
breed_to_rev = pickle.load(handle)
len(breed_to_rev)
# sanity check
not_found = 0
for breed in base_breeds:
if breed not in breed_to_rev:
if snowball.stem(breed) in breed_to_rev:
print "only need to stem "+breed
elif snowball.stem(breed) in rev_to_breed:
print "need to look up extened dict "+ breed +" : "+str(rev_to_breed[snowball.stem(breed)])
else:
print "not found " + breed
not_found += 1
print not_found
Explanation: Score mechanism
input: a probability vector over dog breeds (top 3)
toy -> breed score (averaged review score for that breed)
return: probability-weighted review scores
End of explanation
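Before running the scorer on the real data, here is a hypothetical illustration of the weighting described above; the breed names and numbers are made up for demonstration and are not taken from the dataset:
# hypothetical classifier output and per-breed average ratings for one toy (invented values)
probs = {'pug': 0.7, 'beagle': 0.3}
breed_avg_rating = {'pug': 4.5, 'beagle': 3.0}
weighted_score = sum(probs[b]*breed_avg_rating[b] for b in probs)
print weighted_score  # 0.7*4.5 + 0.3*3.0 = 4.05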
mini_set = df.sample(10).copy()
base_breeds_set = set(base_breeds)
# review_id, toy_id, breeds.....
def get_breed_score(df):
score_df = []
for idx, row in df.iterrows():
score_row = {}
score_row['review_id'] = row['review_id']
score_row['toy_id'] = row['toy_id']
score_row['rating'] = row['rating']
try:
breed_extract = row['breed_extract'].split(',')
matched_item = {}
for b in breed_extract:
if b in base_breeds_set:
matched_item[b] = matched_item.get(b,0)+1
max_p = max(matched_item.values())
total_base = 0
for k, v in matched_item.iteritems():
if v== max_p:
total_base += 1
for k, v in matched_item.iteritems():
if v == max_p:
score_row[k] = 1.0/total_base
except:
pass
score_df.append(score_row)
return score_df
scored_lst = get_breed_score(df)
scored_df = pd.DataFrame(scored_lst)
scored_df.info()
scored_df.fillna(0, inplace=True)
scored_df.head()
save_data = scored_df.to_csv(index=False)
s3_res = boto3.resource('s3')
s3_res.Bucket('dogfaces').put_object(Key='reviews/scored_breed_review.csv', Body=save_data)
# sanity check
scored_df = pd.read_csv("s3://dogfaces/reviews/scored_breed_review.csv")
scored_df.info()
Explanation: Get each base breed's score
End of explanation
# calculating each toy's score
#df_scored = scored_df.copy()
df_scored = scored_df.copy()
df_scored.pop('review_id')
df_scored.pop('rating')
def non_zero_count(x):
return np.sum(x[x>0])
df_breed_count = df_scored.groupby('toy_id').agg(non_zero_count).reset_index()
df_breed_count.head()
breed_columns = [x for x in scored_df.columns if x not in ['toy_id', 'rating', 'review_id']]
mat_scored2 = scored_df[breed_columns].copy().values
mat_scored2 = scored_df['rating'].values.reshape((-1, 1))*mat_scored2  # weight each breed indicator by the review rating (avoids hard-coding the row count)
df_scored_sum = pd.DataFrame(data=mat_scored2, columns=breed_columns)
df_scored_sum = pd.concat([scored_df['toy_id'].copy(), df_scored_sum], axis=1)
df_breed_wet_sum = df_scored_sum.groupby('toy_id').sum().reset_index()
df_breed_wet_sum.head()
df_breed_wet_sum.sort_values(by='toy_id', axis=0, inplace=True)
df_breed_count.sort_values(by='toy_id', axis=0, inplace=True)
weighted_mat = df_breed_count[breed_columns].values
weighted_sum = df_breed_wet_sum[breed_columns].values
with np.errstate(divide='ignore', invalid='ignore'):
res_mat = np.true_divide(weighted_sum, weighted_mat)
res_mat[res_mat==np.inf]=0
res_mat = np.nan_to_num(res_mat)
df_scored_finalscore = pd.DataFrame(data=res_mat, columns=breed_columns)
df_scored_finalscore = pd.concat([df_breed_count['toy_id'].copy(), df_scored_finalscore], axis=1)
df_scored_finalscore.head()
df_toy = pd.read_csv("s3://dogfaces/reviews/toys.csv")
df_toy.head(3)
# make recommendations:
def getRecommendations(probs, score_df, toy_df, k, add_info=None):
# probs is a dictionary
keys = probs.keys()
D = score_df.shape[1]-1
prob_v = np.array(probs.values()).reshape((D,1))
score_mat = score_df[keys].values
fscore_mat = score_mat.dot(prob_v)
top_ind = np.argsort(-fscore_mat[:,0])[:k]
top_toy = score_df['toy_id'].values[top_ind]
likely_ratings = pd.DataFrame({"likely rating":fscore_mat[:,0][top_ind]}, index=None)
if not add_info:
toy_info = toy_df[toy_df['toy_id'].isin(top_toy)][['toy_id','toy_name','price']].copy()
else:
add_info.extend(['toy_id','toy_name','price'])
toy_info = toy_df[toy_df['toy_id'].isin(top_toy)][add_info].copy()
return pd.concat([toy_info.reset_index(), likely_ratings], axis=1)
def getRecommendedToys():
pass
def getToyDislie():
pass
# get recommendations
for i in xrange(53):
probs = [0]*53
ind = i#np.random.randint(53)
probs[ind]=1
print breed_columns[ind]
test_input = dict(zip(breed_columns, probs))
print getRecommendations(test_input,df_scored_finalscore, df_toy, 3, ['toy_link'] )
time.sleep(2)
save_data = df_scored_finalscore.to_csv(index=False)
s3_res = boto3.resource('s3')
s3_res.Bucket('dogfaces').put_object(Key='reviews/scored_breed_toy.csv', Body=save_data)
Explanation: Model version 1: average
End of explanation |
14,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dénes Csala, MCC, Kolozsvár, 2021
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Dimensionality Reduction
Step1: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset
Step2: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution
Step3: To see what these numbers mean, let's view them as vectors plotted on top of the data
Step4: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance
Step5: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression
Step6: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works
Step7: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector
Step8: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum
Step9: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression in which the information that is discarded is mostly noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components
Step10: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used as a sort of data compression. Using a small n_components allows you to represent a high-dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components
Step11: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: Dénes Csala, MCC, Kolozsvár, 2021
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Dimensionality Reduction: Principal Component Analysis in-depth
Here we'll explore Principal Component Analysis, which is an extremely useful linear dimensionality reduction technique.
We'll start with our standard set of initial imports:
End of explanation
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
Explanation: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:
End of explanation
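One extra check that may help here (my addition, not in the original notebook): explained_variance_ratio_ reports the same importance information as a fraction of the total variance, which is often easier to read than the raw values printed above.
# Hedged addition: relative importance of each principal axis (the fractions sum to 1)
print(pca.explained_variance_ratio_)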
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
Explanation: To see what these numbers mean, let's view them as vectors plotted on top of the data:
End of explanation
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
Explanation: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance:
End of explanation
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
Explanation: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression:
End of explanation
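As a quick sanity check (an addition of mine, assuming the clf fitted above), PCA(0.95) reports how many components it kept and how much variance they retain:
# Hedged check: PCA(0.95) kept 1 of the 2 original dimensions,
# and that single component carries at least 95% of the variance
print(clf.n_components_)
print(clf.explained_variance_ratio_.sum())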
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
print(X[0].reshape(8, 8))  # show the first digit as its full 8x8 pixel grid
pca = PCA(2) # project from 64 to 2 dimensions
Xproj = pca.fit_transform(X)
print(X.shape)
print(Xproj.shape)
(1797 * 2) / (1797 * 64)  # fraction of the original data size kept by the 2-D projection
plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
Explanation: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data.
Application of PCA to Digits
The dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
def plot_image_components(x, coefficients=None, mean=0, components=None,
imshape=(8, 8), n_components=6, fontsize=12):
if coefficients is None:
coefficients = x
if components is None:
components = np.eye(len(coefficients), len(x))
mean = np.zeros_like(x) + mean
fig = plt.figure(figsize=(1.2 * (5 + n_components), 1.2 * 2))
g = plt.GridSpec(2, 5 + n_components, hspace=0.3)
def show(i, j, x, title=None):
ax = fig.add_subplot(g[i, j], xticks=[], yticks=[])
ax.imshow(x.reshape(imshape), interpolation='nearest')
if title:
ax.set_title(title, fontsize=fontsize)
show(slice(2), slice(2), x, "True")
approx = mean.copy()
show(0, 2, np.zeros_like(x) + mean, r'$\mu$')
show(1, 2, approx, r'$1 \cdot \mu$')
for i in range(0, n_components):
approx = approx + coefficients[i] * components[i]
show(0, i + 3, components[i], r'$c_{0}$'.format(i + 1))
show(1, i + 3, approx,
r"${0:.2f} \cdot c_{1}$".format(coefficients[i], i + 1))
plt.gca().text(0, 1.05, '$+$', ha='right', va='bottom',
transform=plt.gca().transAxes, fontsize=fontsize)
show(slice(2), slice(-2, None), approx, "Approx")
with plt.style.context('seaborn-white'):
plot_image_components(digits.data[0])
Explanation: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector: in the case of the digits, our data is
$$
x = [x_1, x_2, x_3 \cdots]
$$
but what this really means is
$$
image(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots
$$
If we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image:
End of explanation
def plot_pca_interactive(data, n_components=6):
from sklearn.decomposition import PCA
from ipywidgets import interact
pca = PCA(n_components=n_components)
Xproj = pca.fit_transform(data)
def show_decomp(i=0):
plot_image_components(data[i], Xproj[i],
pca.mean_, pca.components_)
interact(show_decomp, i=(0, data.shape[0] - 1));
plot_pca_interactive(digits.data)
Explanation: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum:
End of explanation
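To make the "mean plus weighted basis functions" formula concrete, here is a minimal sketch (my addition; pca6, coeffs and manual are illustrative names) that rebuilds the first digit by hand from its six coefficients and checks it against inverse_transform:
# Hedged sketch: approximate reconstruction = mean + sum of coefficient * component
pca6 = PCA(n_components=6).fit(digits.data)
coeffs = pca6.transform(digits.data[:1])           # the 6-number representation of the first digit
manual = pca6.mean_ + coeffs @ pca6.components_    # mean plus the weighted sum of 6 basis images
print(np.allclose(manual, pca6.inverse_transform(coeffs)))  # expect True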
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
Explanation: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression in which the information that is discarded is mostly noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components:
End of explanation
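A numeric companion to the plot above (my addition, with illustrative variable names): the smallest number of components that keeps a target fraction of the variance can be read off the same cumulative curve.
# Hedged sketch: how many components are needed to retain 90% of the variance?
cumulative = np.cumsum(PCA().fit(X).explained_variance_ratio_)
n_keep = np.argmax(cumulative >= 0.90) + 1
print(n_keep)  # around 20 for the digits data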
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
pca = PCA(i + 1).fit(X)
im = pca.inverse_transform(pca.transform(X[25:26]))
ax.imshow(im.reshape((8, 8)), cmap='binary')
ax.text(0.95, 0.05, 'n = {0}'.format(i + 1), ha='right',
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
Explanation: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used as a sort of data compression. Using a small n_components allows you to represent a high-dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components:
End of explanation
from ipywidgets import interact
def plot_digits(n_components):
fig = plt.figure(figsize=(8, 8))
plt.subplot(1, 1, 1, frameon=False, xticks=[], yticks=[])
nside = 10
pca = PCA(n_components).fit(X)
Xproj = pca.inverse_transform(pca.transform(X[:nside ** 2]))
Xproj = np.reshape(Xproj, (nside, nside, 8, 8))
total_var = pca.explained_variance_ratio_.sum()
im = np.vstack([np.hstack([Xproj[i, j] for j in range(nside)])
for i in range(nside)])
plt.imshow(im)
plt.grid(False)
plt.title("n = {0}, variance = {1:.2f}".format(n_components, total_var),
size=18)
plt.clim(0, 16)
interact(plot_digits, n_components=[1, 15, 20, 25, 32, 40, 64]);  # plot_digits takes no nside argument, so only n_components is exposed as a widget
Explanation: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once:
End of explanation |
14,144 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <i class="fa fa-diamond"></i> First, style your notebook!
Step2: <i class="fa fa-book"></i> First, the libraries
Step3: <i class="fa fa-database"></i> Let's create some toy data
Create several "blobs"
remember the scikit-learn function datasets.make_blobs()
Also try
python
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
<i class="fa fa-tree"></i> Now let's create a tree model
we can use DecisionTreeClassifier as the classifier
Step4: <i class="fa fa-question-circle"></i> What parameters and functions does the classifier have?
Hint
Step5: let's fit our model with fit and get its score with score
<i class="fa fa-question-circle"></i>
Why don't we want 100%?
This problem is called "overfitting"
<i class="fa fa-list"></i> Steps of a typical ML algorithm
Step6: what are the sizes of these new datasets?
and now we train our model and check the error
<i class="fa fa-question-circle"></i>
What does our model look like?
What was most important for making a decision?
How can we improve and control how we split our data?
Cross-validation and
K-fold
And best of all, we can do it all in one shot with scikit-learn!
We need to use cross_val_score
Step7: <i class="fa fa-question-circle"></i>
And how can we improve on a decision tree?
RandomForestClassifier(n_estimators=n_estimators) to the rescue!
Step8: let's try it!
did it improve?
But now we have a new parameter: how many trees do we want to use?
<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i> ...
What if we try a for loop and check the error as a function of the number of trees?
Activity!
We need to
Step9: Actividad | Python Code:
from IPython.core.display import HTML
import os
def css_styling():
Load default custom.css file from ipython profile
base = os.getcwd()
styles = "<style>\n%s\n</style>" % (open(os.path.join(base,'files/custom.css'),'r').read())
return HTML(styles)
css_styling()
Explanation: <i class="fa fa-diamond"></i> Primero pimpea tu libreta!
End of explanation
import numpy as np
import sklearn as sk
import matplotlib.pyplot as plt
import sklearn.datasets as datasets
import seaborn as sns
%matplotlib inline
Explanation: <i class="fa fa-book"></i> Primero librerias
End of explanation
from sklearn.tree import DecisionTreeClassifier
Explanation: <i class="fa fa-database"></i> Vamos a crear datos de jugete
Crea varios "blobs"
recuerda la funcion de scikit-learn datasets.make_blobs()
Tambien prueba
python
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
<i class="fa fa-tree"></i> Ahora vamos a crear un modelo de arbol
podemos usar DecisionTreeClassifier como clasificador
End of explanation
clf = DecisionTreeClassifier()  # instantiate the classifier so there is a clf object to inspect
help(clf)
Explanation: <i class="fa fa-question-circle"></i> Que parametros y funciones tiene el classificador?
Hint: usa help(cosa)!
End of explanation
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
Explanation: let's fit our model with fit and get its score with score
<i class="fa fa-question-circle"></i>
Why don't we want 100%?
This problem is called "overfitting"
<i class="fa fa-list"></i> Steps of a typical ML algorithm:
Create a model
Split your data into different pieces (10% train and 90% test)
Train your model on each piece of the data
Pick the best model, or the average of the models
Predict!
First, let's split the data using
End of explanation
from sklearn.model_selection import cross_val_score  # updated import path for newer scikit-learn
Explanation: what are the sizes of these new datasets?
and now we train our model and check the error
<i class="fa fa-question-circle"></i>
What does our model look like?
What was most important for making a decision?
How can we improve and control how we split our data?
Cross-validation and
K-fold
And best of all, we can do it all in one shot with scikit-learn!
We need to use cross_val_score
End of explanation
from sklearn.ensemble import RandomForestClassifier
Explanation: <i class="fa fa-question-circle"></i>
Y como podemos mejorar un arbol de decision?
RandomForestClassifier(n_estimators=n_estimators) Al rescate!
End of explanation
iris = sns.load_dataset("iris")  # assumes seaborn's built-in iris DataFrame (it has the 'species' column used below)
g = sns.PairGrid(iris, hue="species")
g = g.map(plt.scatter)
g = g.add_legend()
Explanation: let's try it!
did it improve?
But now we have a new parameter: how many trees do we want to use?
<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i> ...
What if we try a for loop and check the error as a function of the number of trees? (See the sketch after this explanation.)
Activity!
We need to:
Define the range of tree counts to try in an array
write a for loop over this array
For each element, train a forest and get its score
Save the score in a list
plot it!
<i class="fa fa-pagelines"></i> The Iris dataset
A multi-dimensional model
End of explanation
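A minimal sketch of the tree-count sweep described above (my addition; n_trees_grid and scores are illustrative names, and it assumes the X, Y blobs created earlier with datasets.make_blobs plus the imports from the previous cells):
n_trees_grid = [1, 2, 5, 10, 20, 50, 100]
scores = []
for n_trees in n_trees_grid:
    forest = RandomForestClassifier(n_estimators=n_trees)
    # mean 5-fold cross-validated accuracy for this forest size
    scores.append(cross_val_score(forest, X, Y, cv=5).mean())
plt.plot(n_trees_grid, scores, 'o-')
plt.xlabel('n_estimators')
plt.ylabel('cross-validated accuracy');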
iris = datasets.load_iris()
X = iris.data
Y = iris.target
Explanation: Activity:
Objective: Train a tree to predict the species of the plant
Check the plots: which variables might be most important?
Grab the data: what are its dimensions?
Split it into pieces and train your models
What scores do you get? What turned out to be important?
End of explanation |
14,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linking Plots Using Brush Interval Selector
Details on how to use the brush interval selector can be found in this notebook.
Brush interval selectors can be used where continuous updates are not desirable (for example, in callbacks performing slower computations)
The boolean trait brushing can be used to control continuous updates in the BrushSelector. brushing will be set to False when the interval selector is not brushing. We can register callbacks by listening to the brushing trait of the brush selector. We can check the value of brushing trait in the callback and perform updates only at the end of brushing.
Let's now look at an example of linking a time series plot to a scatter plot using a BrushIntervalSelector
Step1: Let's set up an interval selector on a figure containing two time series plots. The interval selector can be activated by clicking on the figure
Step2: Let's now create a scatter plot of the two time series and stack it below the time series plot using a VBox | Python Code:
import numpy as np
from ipywidgets import Layout, HTML, VBox
import bqplot.pyplot as plt
Explanation: Linking Plots Using Brush Interval Selector
Details on how to use the brush interval selector can be found in this notebook.
Brush interval selectors can be used where continuous updates are not desirable (for example, in callbacks performing slower computations)
The boolean trait brushing can be used to control continuous updates in the BrushSelector. brushing will be set to False when the interval selector is not brushing. We can register callbacks by listening to the brushing trait of the brush selector. We can check the value of brushing trait in the callback and perform updates only at the end of brushing.
Let's now look at an example of linking a time series plot to a scatter plot using a BrushIntervalSelector
End of explanation
from bqplot.interacts import BrushIntervalSelector
y1, y2 = np.random.randn(2, 200).cumsum(axis=1) # two simple random walks
fig_layout = Layout(width="900px", height="500px")
time_series_fig = plt.figure(layout=fig_layout)
line = plt.plot([y1, y2])
# create a fast interval selector by passing in the X scale and the line mark on which the selector operates
intsel = BrushIntervalSelector(marks=[line], scale=line.scales["x"])
time_series_fig.interaction = intsel # set the interval selector on the figure
Explanation: Let's set up an interval selector on a figure containing two time series plots. The interval selector can be activated by clicking on the figure
End of explanation
scat_fig = plt.figure(
layout=fig_layout,
animation_duration=750,
title="Scatter of time series slice selected by the interval selector",
)
# set the x and y attributes to the y values of line.y
scat = plt.scatter(*line.y, colors=["red"], stroke="black")
# define a callback for the interval selector
def update_scatter(*args):
brushing = intsel.brushing
# update scatter *only* when the interval selector
# is not brushing to prevent continuous updates
if not brushing:
# interval selector is active
if line.selected is not None:
# get the start and end indices of the interval
start_ix, end_ix = line.selected[0], line.selected[-1]
else: # interval selector is *not* active
start_ix, end_ix = 0, -1
# update the x and y attributes of the scatter by slicing line.y
with scat.hold_sync():
scat.x, scat.y = line.y[:, start_ix:end_ix]
# register the callback with brushing trait of interval selector
intsel.observe(update_scatter, "brushing")
help_label = HTML(
'<div style="color: blue; font-size: 16px; margin:20px 0px 0px 50px">\
Brush on the time series plot to activate the interval selector</div>'
)
VBox([help_label, time_series_fig, scat_fig])
Explanation: Let's now create a scatter plot of the two time series and stack it below the time series plot using a VBox
End of explanation |
14,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Feature Engineering in Keras
Learning Objectives
Process temporal feature columns in Keras
Use Lambda layers to perform feature engineering on geolocation features
Create bucketized and crossed feature columns
Overview
In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides.
We will start by importing the necessary libraries for this lab.
Step1: Load taxifare dataset
The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict.
Let's check the files look like we expect them to.
Step2: Create an input pipeline
Typically, you will use a two step proces to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model.
Step3: Create a Baseline DNN Model in Keras
Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. Recall that a baseline model is a solution to a problem without applying any machine learning techniques.
Step4: We'll build our DNN model and inspect the model architecture.
Step5: Train the model
To train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model.
Step6: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
Step7: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime.
Step8: Improve Model Performance Using Feature Engineering
We now improve our model's performance by creating the following feature engineering types
Step9: Geolocation/Coordinate Feature Columns
The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allows us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York city has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
Computing Euclidean distance
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
Step10: Scaling latitude and longitude
It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation fetures. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.
First, we create a function named 'scale_longitude', where we pass in all the longitudinal values and add 78 to each value. Note that our scaling longitude ranges from -70 to -78. Thus, the value 78 is the maximum longitudinal value. The delta or difference between -70 and -78 is 8. We add 78 to each longitudinal value and then divide by 8 to return a scaled value.
Step11: Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our scaling longitude ranges from -37 to -45. Thus, the value 37 is the minimal latitudinal value. The delta or difference between -37 and -45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value.
Step12: Putting it all together
We now create two new "geo" functions for our model. We create a function called "euclidean" to initialize our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclidean distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.
Step13: Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.
Step14: Let's see how our model architecture has changed now.
Step15: As before, let's visualize the DNN model layers.
Step16: Let's a prediction with this new model with engineered features on the example we had above. | Python Code:
import datetime
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models
# set TF error log verbosity
logging.getLogger("tensorflow").setLevel(logging.ERROR)
print(tf.version.VERSION)
Explanation: Advanced Feature Engineering in Keras
Learning Objectives
Process temporal feature columns in Keras
Use Lambda layers to perform feature engineering on geolocation features
Create bucketized and crossed feature columns
Overview
In this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides.
We will start by importing the necessary libraries for this lab.
End of explanation
!ls -l ../data/taxi-*.csv
!head ../data/taxi-*.csv
Explanation: Load taxifare dataset
The Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict.
Let's check the files look like we expect them to.
End of explanation
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
STRING_COLS = ["pickup_datetime"]
NUMERIC_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
# A function to define features and labesl
def features_and_labels(row_data):
for unwanted_col in ["key"]:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility method to create a tf.data dataset from a Pandas Dataframe
def load_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels) # features, label
if mode == "train":
dataset = dataset.shuffle(1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
Explanation: Create an input pipeline
Typically, you will use a two step proces to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model.
End of explanation
# Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred): # Root mean square error
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname) for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
h1 = layers.Dense(32, activation="relu", name="h1")(dnn_inputs)
h2 = layers.Dense(8, activation="relu", name="h2")(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation="linear", name="fare")(h2)
model = models.Model(inputs, output)
# compile model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
Explanation: Create a Baseline DNN Model in Keras
Now let's build the Deep Neural Network (DNN) model in Keras using the functional API. Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. Recall that a baseline model is a solution to a problem without applying any machine learning techniques.
End of explanation
model = build_dnn_model()
tf.keras.utils.plot_model(
model, "dnn_model.png", show_shapes=False, rankdir="LR"
)
Explanation: We'll build our DNN model and inspect the model architecture.
End of explanation
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 7333 * 30
NUM_EVALS = 30
NUM_EVAL_EXAMPLES = 1571
trainds = load_dataset("../data/taxi-train*", TRAIN_BATCH_SIZE, "train")
evalds = load_dataset("../data/taxi-valid*", 1000, "eval").take(
NUM_EVAL_EXAMPLES // 1000
)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
)
Explanation: Train the model
To train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
We start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model.
End of explanation
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx + 1)
plt.plot(history.history[key])
plt.plot(history.history[f"val_{key}"])
plt.title(f"model {key}")
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
plot_curves(history, ["loss", "mse"])
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.
End of explanation
model.predict(
{
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
"pickup_datetime": tf.convert_to_tensor(
["2010-02-08 09:17:00 UTC"], dtype=tf.string
),
},
steps=1,
)
Explanation: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. Next we note the fare price at this geolocation and pickup_datetime.
End of explanation
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode("utf-8")
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in
)
Explanation: Improve Model Performance Using Feature Engineering
We now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation.
Temporal Feature Columns
We incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature.
End of explanation
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff * londiff + latdiff * latdiff)
Explanation: Geolocation/Coordinate Feature Columns
The pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.
Recall that latitude and longitude allows us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York city has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.
Computing Euclidean distance
The dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.
End of explanation
def scale_longitude(lon_column):
return (lon_column + 78) / 8.0
Explanation: Scaling latitude and longitude
It is very important for numerical variables to get scaled before they are "fed" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation fetures. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.
First, we create a function named 'scale_longitude', where we pass in all the longitudinal values and add 78 to each value. Note that our scaling longitude ranges from -70 to -78. Thus, the value 78 is the maximum longitudinal value. The delta or difference between -70 and -78 is 8. We add 78 to each longitudinal value and then divide by 8 to return a scaled value.
End of explanation
def scale_latitude(lat_column):
return (lat_column - 37) / 8.0
Explanation: Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our scaling longitude ranges from -37 to -45. Thus, the value 37 is the minimal latitudinal value. The delta or difference between -37 and -45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value.
End of explanation
def transform(inputs, numeric_cols, string_cols, nbuckets):
print(f"Inputs before features transformation: {inputs.keys()}")
# Pass-through columns
transformed = inputs.copy()
del transformed["pickup_datetime"]
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ["pickup_longitude", "dropoff_longitude"]:
transformed[lon_col] = layers.Lambda(
scale_longitude, name=f"scale_{lon_col}"
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ["pickup_latitude", "dropoff_latitude"]:
transformed[lat_col] = layers.Lambda(
scale_latitude, name=f"scale_{lat_col}"
)(inputs[lat_col])
# add Euclidean distance
transformed["euclidean"] = layers.Lambda(euclidean, name="euclidean")(
[
inputs["pickup_longitude"],
inputs["pickup_latitude"],
inputs["dropoff_longitude"],
inputs["dropoff_latitude"],
]
)
feature_columns["euclidean"] = fc.numeric_column("euclidean")
# create bucketized features
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns["pickup_latitude"], latbuckets
)
b_dlat = fc.bucketized_column(
feature_columns["dropoff_latitude"], latbuckets
)
b_plon = fc.bucketized_column(
feature_columns["pickup_longitude"], lonbuckets
)
b_dlon = fc.bucketized_column(
feature_columns["dropoff_longitude"], lonbuckets
)
# create crossed columns
ploc = fc.crossed_column([b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column([b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets**4)
# create embedding columns
feature_columns["pickup_and_dropoff"] = fc.embedding_column(pd_pair, 100)
print(f"Transformed features: {transformed.keys()}")
print(f"Feature columns: {feature_columns.keys()}")
return transformed, feature_columns
Explanation: Putting it all together
We now create two new "geo" functions for our model. We create a function called "euclidean" to initialize our geolocation parameters. We then create a function called transform. The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclidean distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.
End of explanation
NBUCKETS = 10
# DNN MODEL
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# input layer is all float except for pickup_datetime which is a string
inputs = {
colname: layers.Input(name=colname, shape=(), dtype="float32")
for colname in NUMERIC_COLS
}
inputs.update(
{
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string"
)
for colname in STRING_COLS
}
)
# transforms
transformed, feature_columns = transform(
inputs,
numeric_cols=NUMERIC_COLS,
string_cols=STRING_COLS,
nbuckets=NBUCKETS,
)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
# two hidden layers of [32, 8] just in like the BQML DNN
h1 = layers.Dense(32, activation="relu", name="h1")(dnn_inputs)
h2 = layers.Dense(8, activation="relu", name="h2")(h1)
# final output is a linear activation because this is regression
output = layers.Dense(1, activation="linear", name="fare")(h2)
model = models.Model(inputs, output)
# Compile model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
model = build_dnn_model()
Explanation: Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.
End of explanation
tf.keras.utils.plot_model(
model, "dnn_model_engineered.png", show_shapes=False, rankdir="LR"
)
trainds = load_dataset("../data/taxi-train*", TRAIN_BATCH_SIZE, "train")
evalds = load_dataset("../data/taxi-valid*", 1000, "eval").take(
NUM_EVAL_EXAMPLES // 1000
)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS + 3,
steps_per_epoch=steps_per_epoch,
)
Explanation: Let's see how our model architecture has changed now.
End of explanation
plot_curves(history, ["loss", "mse"])
Explanation: As before, let's visualize the DNN model layers.
End of explanation
model.predict(
{
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
"pickup_datetime": tf.convert_to_tensor(
["2010-02-08 09:17:00 UTC"], dtype=tf.string
),
},
steps=1,
)
Explanation: Let's a prediction with this new model with engineered features on the example we had above.
End of explanation |
14,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NCEM's 4D-STEM Basic Jupyter Notebook
Quickly process and investigate 4D-STEM data from the TitanX
To start
Step1: Import the data and reshape to 4D
Change dirName to the directory where your data lives
Change the fName to the full file name
Step2: Find the location of the zero beam and generate BF
Assumes the first diffraction pattern will have the least structure.
Use center of intensity to find pattern center
Step3: Investigate the data
Scroll back and forth in the two axes with update of current position on Bright Field image
Step4: Find the maximum intensity for every pixel in the diffraction pattern
Useful to see features close to the noise floor | Python Code:
dirName = r'C:\Users\Peter\Data\Te NP 4D-STEM'
fName = r'07_45x8 ss=5nm_spot11_CL=100 0p1s_alpha=4p63mrad_bin=4_300kV.dm4'
%matplotlib widget
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
import ncempy.io as nio
import ncempy.algo as nalgo
import ipywidgets as widgets
from ipywidgets import interact, interactive
Explanation: NCEM's 4D-STEM Basic Jupyter Notebook
Quickly process and investigate 4D-STEM data from the TitanX
To start:
Change the dirName and fName
Select Cell -- Run All
Scroll to bottom and investigate your data
End of explanation
#Load the data using ncempy
fPath = Path(dirName) / Path(fName)
with nio.dm.fileDM(fPath.as_posix()) as dm1:
im1 = dm1.getDataset(0)
scanI = int(dm1.allTags['.ImageList.2.ImageTags.Series.nimagesx'])
scanJ = int(dm1.allTags['.ImageList.2.ImageTags.Series.nimagesy'])
numkI = im1['data'].shape[2]
numkJ = im1['data'].shape[1]
data = im1['data'].reshape([scanJ,scanI,numkJ,numkI])
print('Data shape is {}'.format(data.shape))
Explanation: Import the data and reshape to 4D
Change dirName to the directory where your data lives
Change the fName to the full file name
End of explanation
fg1,ax1 = plt.subplots(3,1,figsize=(10,6))
ax1[0].imshow(data[0,0,:,:])
# Find center of intensity
cm0 = nalgo.moments.centroid(nalgo.moments.moments(data[0,0,:,:].astype(np.float64)))
cm0 = [int(ii) for ii in cm0] # change to integer
# Plot the first diffraction pattern and found center
ax1[0].plot(cm0[1],cm0[0],'rx')
ax1[0].legend(['Center of central beam'])
ax1[0].set(title='First diffraction pattern\nCenter = {}'.format(cm0))
# Generate a bright field image
box0 = 25
BF0 = np.sum(np.sum(data[:,:,cm0[0]-box0:cm0[0]+box0,cm0[1]-box0:cm0[1]+box0],axis=3),axis=2)
ax1[1].imshow(BF0)
ax1[1].set(title='Bright field image')
ax1[2].imshow(np.sum(data, axis=(2,3)))
ax1[2].set(title='Sum of all diffraction intensity')
fg1.tight_layout()
Explanation: Find the location of the zero beam and generate BF
Assumes the first diffraction pattern will have the least structure.
Use center of intensity to find pattern center
End of explanation
im1 = data[:,:,::1,::1]
fg1,(ax1,ax2) = plt.subplots(1,2,figsize=(8,8))
p1 = ax1.plot(4,4,'or')
p1 = p1[0]
ax1.imshow(BF0)
im2 = ax2.imshow(np.log(im1[4,4,:,:]+50))
#Updates the plots
def axUpdate(i,j):
p1.set_xdata(i)
p1.set_ydata(j)
im2.set_array(np.log(im1[j,i,:,:]+50))
ax1.set(title='Bright Field Image',xlabel='i',label='j')
ax2.set(title='Diffraction pattern (log(I))')
#Connect the function and the sliders
w = interactive(axUpdate, i=(0,BF0.shape[1]-1), j=(0,BF0.shape[0]-1))
wB = widgets.Button(
description='Save current DP',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip=''
)
def saveCurrentDP(a):
curI = w.children[0].get_interact_value()
curJ = w.children[1].get_interact_value()
im = Image.fromarray(data[curJ,curI,:,:])
outName = fPath.as_posix() + '_DP{}i_{}j.tif'.format(curI,curJ)
im.save(outName)
wB.on_click(saveCurrentDP)
display(w)
display(wB)
Explanation: Investigate the data
Scroll back and forth in the two axes with update of current position on Bright Field image
End of explanation
DPmax = np.max(im1.reshape((im1.shape[0]*im1.shape[1],im1.shape[2],im1.shape[3])),axis=0)
#Plot the image
fg2,ax2 = plt.subplots(1,1)
ax2.imshow(np.sqrt(DPmax))
ax2.set(title='Maximum intensity for each detector pixel (sqrt)');
Explanation: Find the maximum intensity for every pixel in the diffraction pattern
Useful to see features close to the noise floor
End of explanation |
14,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3
Step1: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories
Step2: Implementation
Step3: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint
Step4: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits?
Hint
Step5: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint
Step6: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
Step7: Implementation
Step8: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer
Step9: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint
Step10: Implementation
Step11: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
Step12: Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer
Step13: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer
Step14: Implementation
Step15: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint
Step16: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Display a description of the dataset
display(data.describe())
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = []
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = None
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = (None, None, None, None)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = None
# TODO: Report the score of the prediction using the testing set
score = None
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
Answer:
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
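One way to fill in the TODOs above is sketched below. The choice of 'Grocery' as the feature to drop is arbitrary (any of the six categories works), and the sketch uses the modern sklearn.model_selection home of train_test_split; the older scikit-learn versions this template targets import it from sklearn.cross_validation instead.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# drop the chosen feature and use it as the regression target
new_data = data.drop('Grocery', axis = 1)
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Grocery'], test_size = 0.25, random_state = 42)
# fit a decision tree regressor and report its R^2 score on the held-out set
regressor = DecisionTreeRegressor(random_state = 42).fit(X_train, y_train)
score = regressor.score(X_test, y_test)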
End of explanation
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer:
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
# TODO: Scale the data using the natural logarithm
log_data = None
# TODO: Scale the sample data using the natural logarithm
log_samples = None
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
Answer:
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often a critical step in assuring that results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox transformation, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying a logarithm scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying a logarithm scaling. Again, use np.log.
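The two TODOs above amount to a single element-wise call each; a minimal sketch:
log_data = np.log(data)
log_samples = np.log(samples)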
End of explanation
# Display the log-transformed sample data
display(log_samples)
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = None
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = None
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = None
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove
outliers = []
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
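A sketch of the three TODOs inside the loop above, using np.percentile and Tukey's 1.5 x IQR rule (which indices, if any, to add to outliers is left as the judgment call it is meant to be):
Q1 = np.percentile(log_data[feature], 25)
Q3 = np.percentile(log_data[feature], 75)
step = 1.5 * (Q3 - Q1)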
End of explanation
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = None
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = None
# Generate PCA results plot
pca_results = rs.pca_results(good_data, pca)
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer:
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.
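A minimal sketch of the six-dimensional fit (good_data has six features, so n_components = 6 keeps every dimension):
from sklearn.decomposition import PCA
pca = PCA(n_components = 6).fit(good_data)
pca_samples = pca.transform(log_samples)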
End of explanation
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
Answer:
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = None
# TODO: Transform the good data using the PCA fit above
reduced_data = None
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = None
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.
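The two-dimensional reduction follows the same pattern as the previous sketch; only n_components changes:
pca = PCA(n_components = 2).fit(good_data)
reduced_data = pca.transform(good_data)
pca_samples = pca.transform(log_samples)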
End of explanation
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
End of explanation
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = None
# TODO: Predict the cluster for each data point
preds = None
# TODO: Find the cluster centers
centers = None
# TODO: Predict the cluster for each transformed sample data point
sample_preds = None
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = None
Explanation: Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer:
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them to sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
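A sketch using K-Means with two clusters; a Gaussian Mixture Model is an equally valid choice here, and the number of clusters is an assumption to be justified by comparing silhouette scores:
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
clusterer = KMeans(n_clusters = 2, random_state = 42).fit(reduced_data)
preds = clusterer.predict(reduced_data)
centers = clusterer.cluster_centers_
sample_preds = clusterer.predict(pca_samples)
score = silhouette_score(reduced_data, preds)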
End of explanation
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer:
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
# TODO: Inverse transform the centers
log_centers = None
# TODO: Exponentiate the centers
true_centers = None
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
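Both inverse transformations are a single call each: undo the PCA projection, then undo the logarithm:
log_centers = pca.inverse_transform(centers)
true_centers = np.exp(log_centers)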
End of explanation
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
Answer:
Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to belong to.
End of explanation
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, outliers, pca_samples)
Explanation: Answer:
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation |
14,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
sharing internet on the local machine
ifconfig
scanning the local network to find the raspberry π
nmap -T4 -sP 192.168.2.0/24
what I used to clone the repository on the π
686 pip install pyserial
687 vim .ssh/id_rsa.pub
688 vim .ssh/id_rsa
689 chmod 0600 .ssh/id_rsa
690 git clone [email protected]
Step1: we check that this works for a whole series of desired positions and that a half turn is avoided
Step2: we also check that this works for a whole series of initial positions and that a half turn is avoided
Step3: Arduino
motor + gear coordinates
Step4: So we end up with a number that is not even an integer, but that is fine. It is the number of motor steps needed to make the blade complete one full turn. For now, we will treat this circle as integers between 0 and n_pas. It should be possible to send this to each Arduino as an int16 without problems (to be verified; if there is some margin, we can do something more precise)
Step5: The algorithm on the Arduino must therefore include this part to convert a received absolute position into a relative command to send to the motors
Step6: git | Python Code:
nb_pas = 12
position_present = 6
position_desired = 4
d_position = (position_desired - position_present + nb_pas//2 ) % nb_pas - nb_pas//2
print (d_position)
position_present = (position_present + d_position ) % nb_pas
print (position_present)
Explanation: sharing internet on the local machine
ifconfig
scanning the local network to find the raspberry π
nmap -T4 -sP 192.168.2.0/24
what I used to clone the repository on the π
686 pip install pyserial
687 vim .ssh/id_rsa.pub
688 vim .ssh/id_rsa
689 chmod 0600 .ssh/id_rsa
690 git clone [email protected]:laurentperrinet/elasticte.git
691 cd elasticte/
692 ls
693 cat scenario_line_contraint.py
694 python scenario_line_contraint.py serial
695 pip install -e .
696 pip install --user -e .
697 python scenario_line_contraint.py serial
starting
````
⇒ ssh [email protected]
Linux pielastic 3.18.7-v7+ #755 SMP PREEMPT Thu Feb 12 17:20:48 GMT 2015 armv7l
pi@pielastic ~ $ cd elasticte/
pi@pielastic ~/elasticte $ git pull # to update the code
Already up-to-date.
pi@pielastic ~/elasticte $ python scenario_line_contraint.py serial
````
absolute coordinates
Let's take a simple case with 12 possible steps and compute, from the current position and the desired position, the command to send to the motors
End of explanation
nb_pas = 12
position_present = 2
position_desired = np.arange(nb_pas)
d_position = (position_desired - position_present + nb_pas//2 ) % nb_pas - nb_pas//2
print (d_position)
position_present = (position_present + d_position ) % nb_pas
print (position_present)
Explanation: we check that this works for a whole series of desired positions and that a half turn is avoided:
End of explanation
nb_pas = 12
position_present = np.arange(nb_pas)
position_desired = 8
d_position = (position_desired - position_present + nb_pas//2 ) % nb_pas - nb_pas//2
print (d_position)
position_present = (position_present + d_position ) % nb_pas
print (position_present)
Explanation: we also check that this works for a whole series of initial positions and that a half turn is avoided:
End of explanation
n_pas = 200. * 32. * 60 / 14
print(n_pas)
Explanation: Arduino
motor + gear coordinates:
- 1.8 degrees per step (i.e. 200 steps per revolution) x 32 microsteps =
- gear reduction: gear 1 = 14 teeth, gear 2 = 60 teeth
End of explanation
2**16
Explanation: So we end up with a number that is not even an integer, but that is fine. It is the number of motor steps needed to make the blade complete one full turn. For now, we will treat this circle as integers between 0 and n_pas. It should be possible to send this to each Arduino as an int16 without problems (to be verified; if there is some margin, we can do something more precise):
End of explanation
nb_pas = 27428
position_present = 27424
position_desired = 4 # received by reading the serial port
# when an absolute value is received we compute
d_position = (position_desired - position_present + nb_pas//2 ) % nb_pas - nb_pas//2
# send d_position to the motors
# update the motor position
position_present = (position_present + d_position ) % nb_pas
print(d_position, position_present)
Explanation: The algorithm on the Arduino must therefore include this part to convert a received absolute position into a relative command to send to the motors:
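For reference, the same conversion can be wrapped in a small helper on the Python side; this is only a sketch mirroring the cell above, while the real firmware logic lives on the Arduino:
def absolute_to_relative(position_desired, position_present, nb_pas):
    # shortest signed displacement on a circle of nb_pas steps (avoids the half turn)
    d_position = (position_desired - position_present + nb_pas//2) % nb_pas - nb_pas//2
    return d_position, (position_present + d_position) % nb_pas
d_position, position_present = absolute_to_relative(4, 27424, 27428)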
End of explanation
!git s
#!git add 2015-10-27\ élasticité\ r*
!git commit -am' on connecte le π + coordonnées absolues'
! git push
Explanation: git
End of explanation |
14,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4. External Quantification Tools
Step1: LCModel
LCModel requires several different files in order to process a spectrum. The actual MRS data is stored in the time domain in a .RAW file, with an optional .H2O file containing the water reference data (these are actually the same format but the extension helps in distinguishing the files). A .CONTROL file is what the LCModel program actually receives as input; it lists the input and output files, and any parameters which have non-default values. Finally the metabolite basis set used for fitting the data is contained in a .BASIS file. The .BASIS file is normally provided with LCModel and unlike the other files does not need to be changed with each new dataset.
Suspect can generate all the files necessary to process data with LCModel using the write_all_files() function from the suspect.io.lcmodel module. This function takes a path to which to save the .RAW file (any other files will be saved in the same directory), the MRSData object to be written, and an optional params dictionary to customise any of the values in the .CONTROL file. This dictionary can contain any of the parameter names from the LCModel manual. Let's take a look at an example of how to save our data for LCModel to process.
Step2: We can use some IPython magic to show the files that were created
Step3: and to look at the contents of the .CONTROL file
Step4: The .CONTROL file contains all the parameters necessary for LCModel to process the file, including the path we specified to the correct basis set. Once the .RAW and .CONTROL files are generated, it only remains to run the LCModel program on the command line, passing in the .CONTROL file like this | Python Code:
import suspect
import numpy as np
from matplotlib import pyplot as plt
%matplotlib nbagg
data = suspect.io.load_rda("/home/jovyan/suspect/tests/test_data/siemens/SVS_30.rda")
Explanation: 4. External Quantification Tools
End of explanation
# create a parameters dictionary to set the basis set to use
params = {
"FILBAS": "/path/to/lcmodel/basis.BASIS"
}
suspect.io.lcmodel.write_all_files("lcmodel_data/example.RAW", data, params=params)
Explanation: LCModel
LCModel requires several different files in order to process a spectrum. The actual MRS data is stored in the time domain in a .RAW file, with an optional .H2O file containing the water reference data (these are actually the same format but the extension helps in distinguishing the files). A .CONTROL file is what the LCModel program actually receives as input; it lists the input and output files, and any parameters which have non-default values. Finally the metabolite basis set used for fitting the data is contained in a .BASIS file. The .BASIS file is normally provided with LCModel and unlike the other files does not need to be changed with each new dataset.
Suspect can generate all the files necessary to process data with LCModel using the write_all_files() function from the suspect.io.lcmodel module. This function takes a path to which to save the .RAW file (any other files will be saved in the same directory), the MRSData object to be written, and an optional params dictionary to customise any of the values in the .CONTROL file. This dictionary can contain any of the parameter names from the LCModel manual. Let's take a look at an example of how to save our data for LCModel to process.
End of explanation
!ls lcmodel_data/
Explanation: We can use some IPython magic to show the files that were created:
End of explanation
!cat lcmodel_data/example_sl0.CONTROL
Explanation: and to look at the contents of the .CONTROL file:
End of explanation
# create a parameters dictionary to set the basis set to use
params = {
"FILBAS": "/path/to/lcmodel/basis.BASIS",
"LCSV": True
}
suspect.io.lcmodel.write_all_files("lcmodel_data/example.RAW", data, params=params)
!cat lcmodel_data/example_sl0.CONTROL
Explanation: The .CONTROL file contains all the parameters necessary for LCModel to process the file, including the path we specified to the correct basis set. Once the .RAW and .CONTROL files are generated, it only remains to run the LCModel program on the command line, passing in the .CONTROL file like this:
lcmodel < lcmodel_data/example_sl0.CONTROL
If you are running your Suspect code on the same computer as your LCModel installation, so that lcmodel is in your path, it is trivial to run it from within a Jupyter notebook using IPython magic, or from a script using the subprocess module. However if you are running Suspect on a different computer, for example using the OpenMRSLab Docker container, then you will have to transfer the LCModel files to the LCModel computer for processing. This can be inconvenient because the paths in the generated .CONTROL file can become out of date. In our lab we have solved this problem by using a shared network drive which is mounted by both machines, then the paths remain consistent and the lcmodel program can be launched remotely from the Docker container with a simple ssh command. Another alternative is to use the suspect.io.lcmodel.save_raw() function instead to save only the .RAW file which can then be loaded by LCMGUI and the other parameters configured there.
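As a rough sketch of that last alternative (the exact argument list of save_raw may differ between suspect versions, so treat it as an assumption rather than the definitive API):
suspect.io.lcmodel.save_raw("lcmodel_data/example.RAW", data)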
By default the control file is set to only generate the standard 1 page .PS output. The other output files can be generated by setting the appropriate options in the params dictionary. For example, to generate the .CSV file, set the "LCSV" parameter to True. In this case, Suspect will automatically generate the path for the .CSV in the same folder as the .RAW and .CONTROL files. To set a custom location to save the .CSV, instead set the parameter "FILCSV" in params.
End of explanation |
14,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with events
This tutorial describes event representation and how event arrays are used to
subselect data.
Step1: The tutorial tut-events-vs-annotations describes in detail the
different ways of obtaining an Events array from a Raw object.
Step2: Reading and writing events from/to a file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Event arrays are NumPy arrays that can be written to and read from disk with mne.write_events and mne.read_events.
Step3: When writing event arrays to disk, the format will be inferred from the file
extension you provide. By convention, MNE-Python expects events files to
either have an .eve extension or to have a file basename ending in -eve or _eve.
Step4: If some of those events are not of interest, you can easily subselect events
using mne.pick_events, which has include and exclude parameters.
Step5: It is also possible to combine two Event IDs using mne.merge_events.
Step6: Note, however, that merging events is not necessary if you simply want to
pool trial types for analysis; the next section describes how MNE-Python uses
event dictionaries to map integer Event IDs to more descriptive label
strings.
Mapping Event IDs to trial descriptors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So far in this tutorial we've only been dealing with integer Event IDs, which
were assigned based on DC voltage pulse magnitude (which is ultimately
determined by the experimenter's choices about what signals to send to the
STIM channels). Keeping track of which Event ID corresponds to which
experimental condition can be cumbersome, and it is often desirable to pool
experimental conditions during analysis. You may recall that the mapping of
integer Event IDs to meaningful descriptions for the sample dataset
<sample-dataset> is given in this table
<sample-data-event-dict-table> in the introductory tutorial
<tut-overview>. Here we simply reproduce that mapping as an
event dictionary
Step7: Event dictionaries like this one are used when extracting epochs from
continuous data, and the resulting Epochs object allows pooling by requesting partial trial descriptors.
Step8: Plotting events and raw data together
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Events can also be plotted alongside the Raw object they were extracted from.
Step9: Making equally-spaced Events arrays
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For some experiments (such as those intending to analyze resting-state
activity) there may not be any experimental events included in the raw
recording. In such cases, an Events array of equally-spaced events can be
generated using | Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
Explanation: Working with events
This tutorial describes event representation and how event arrays are used to
subselect data.
:depth: 2
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping the :class:~mne.io.Raw
object to just 60 seconds before loading it into RAM to save memory:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
Explanation: The tutorial tut-events-vs-annotations describes in detail the
different ways of obtaining an :term:Events array <events> from a
:class:~mne.io.Raw object (see the section
overview-tut-events-section for details). Since the sample
dataset <sample-dataset> includes experimental events recorded on
:term:STIM channel STI 014, we'll start this tutorial by parsing the
events from that channel using :func:mne.find_events:
End of explanation
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw-eve.fif')
events_from_file = mne.read_events(sample_data_events_file)
assert np.array_equal(events, events_from_file[:len(events)])
Explanation: Reading and writing events from/to a file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Event arrays are :class:NumPy array <numpy.ndarray> objects, so they could
be saved to disk as binary :file:.npy files using :func:numpy.save.
However, MNE-Python provides convenience functions :func:mne.read_events
and :func:mne.write_events for reading and writing event arrays as either
text files (common file extensions are :file:.eve, :file:.lst, and
:file:.txt) or binary :file:.fif files. The example dataset includes the
results of mne.find_events(raw) in a :file:.fif file. Since we've
truncated our :class:~mne.io.Raw object, it will have fewer events than the
events file loaded from disk (which contains events for the entire
recording), but the events should match for the first 60 seconds anyway:
End of explanation
mne.find_events(raw, stim_channel='STI 014')
Explanation: When writing event arrays to disk, the format will be inferred from the file
extension you provide. By convention, MNE-Python expects events files to
either have an :file:.eve extension or to have a file basename ending in
-eve or _eve (e.g., :file:{my_experiment}_eve.fif), and will issue
a warning if this convention is not respected.
Subselecting and combining events
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The output of :func:~mne.find_events above (repeated here) told us the
number of events that were found, and the unique integer event IDs present:
End of explanation
events_no_button = mne.pick_events(events, exclude=32)
Explanation: If some of those events are not of interest, you can easily subselect events
using :func:mne.pick_events, which has parameters include and
exclude. For example, in the sample data Event ID 32 corresponds to a
subject button press, which could be excluded as:
End of explanation
merged_events = mne.merge_events(events, [1, 2, 3], 1)
print(np.unique(merged_events[:, -1]))
Explanation: It is also possible to combine two Event IDs using :func:mne.merge_events;
the following example will combine Event IDs 1, 2 and 3 into a single event
labelled 1:
End of explanation
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
Explanation: Note, however, that merging events is not necessary if you simply want to
pool trial types for analysis; the next section describes how MNE-Python uses
event dictionaries to map integer Event IDs to more descriptive label
strings.
Mapping Event IDs to trial descriptors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So far in this tutorial we've only been dealing with integer Event IDs, which
were assigned based on DC voltage pulse magnitude (which is ultimately
determined by the experimenter's choices about what signals to send to the
STIM channels). Keeping track of which Event ID corresponds to which
experimental condition can be cumbersome, and it is often desirable to pool
experimental conditions during analysis. You may recall that the mapping of
integer Event IDs to meaningful descriptions for the sample dataset
<sample-dataset> is given in this table
<sample-data-event-dict-table> in the introductory tutorial
<tut-overview>. Here we simply reproduce that mapping as an
event dictionary:
End of explanation
fig = mne.viz.plot_events(events, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp, event_id=event_dict)
fig.subplots_adjust(right=0.7) # make room for legend
Explanation: Event dictionaries like this one are used when extracting epochs from
continuous data, and the resulting :class:~mne.Epochs object allows pooling
by requesting partial trial descriptors. For example, if we wanted to pool
all auditory trials, instead of merging Event IDs 1 and 2 using the
:func:~mne.merge_events function, we can make use of the fact that the keys
of event_dict contain multiple trial descriptors separated by /
characters: requesting 'auditory' trials will select all epochs with
Event IDs 1 and 2; requesting 'left' trials will select all epochs with
Event IDs 1 and 3. An example of this is shown in a later tutorial.
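To make that pooling syntax concrete, a short sketch (keeping all epoching parameters at their defaults; this mirrors the later tutorial rather than anything executed in this one):
epochs = mne.Epochs(raw, events, event_id=event_dict)
auditory_epochs = epochs['auditory']  # pools Event IDs 1 and 2
left_epochs = epochs['left']  # pools Event IDs 1 and 3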
.. TODO replace above sentence when the relevant tut is ready:
An example of this is shown later, in the epoch-pooling section of
the epochs-intro-tutorial tutorial.
Plotting events
^^^^^^^^^^^^^^^
Another use of event dictionaries is when plotting events, which can serve as
a useful check that your event signals were properly sent to the STIM
channel(s) and that MNE-Python has successfully found them. The function
:func:mne.viz.plot_events will plot each event versus its sample number
(or, if you provide the sampling frequency, it will plot them versus time in
seconds). It can also account for the offset between sample number and sample
index in Neuromag systems, with the first_samp parameter. If an event
dictionary is provided, it will be used to generate a legend:
End of explanation
raw.plot(events=events, start=5, duration=10, color='gray',
event_color={1: 'r', 2: 'g', 3: 'b', 4: 'm', 5: 'y', 32: 'k'})
Explanation: Plotting events and raw data together
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Events can also be plotted alongside the :class:~mne.io.Raw object they
were extracted from, by passing the Event array as the events parameter
of :meth:raw.plot <mne.io.Raw.plot>:
End of explanation
new_events = mne.make_fixed_length_events(raw, start=5, stop=50, duration=2.)
Explanation: Making equally-spaced Events arrays
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For some experiments (such as those intending to analyze resting-state
activity) there may not be any experimental events included in the raw
recording. In such cases, an Events array of equally-spaced events can be
generated using :func:mne.make_fixed_length_events:
End of explanation |
14,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Revenue Prediction for Site Selection
Whether it’s expansion, consolidation or performance monitoring, understanding revenue drivers is essential for Site Planning in many sectors such as Retail or Restaurant and Food Services.
This notebook walks you through all the data collection and preparation steps required for building a revenue prediction model. The main steps followed are
Step1: In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
Step2: 1. Load data
We'll start by loading the CSV file containing all stores in Madrid with their address and annual revenue. There are 57 stores in total.
Step3: 2. Processing data. Geocoding
We have the address of each store, but we need their coordinates in order to perform spatial analysis. We'll use CARTOframes geocoding functionality for this.
Step4: 3. Spatial Data Analysis
Once we have the stores geocoded, we'll analyze the spatial distribution of annual sales.
Step5: 3. Enrichment
Sales in a grocery store are mainly influenced by
Step6: 3.2 Enrichment
Once we have the area of influence of every store, we can enrich our initial data.
Step7: 3.2.1 POIs
We'll start by enriching with POI data. We'll calculate for every store
Step8: Number of POIs
We'll calculate the number of POIs within each store's catchment area. This gives us an idea of how busy the area is.
Step9: Number of competitors
Next, we'll count the number of competitors within the 10 minute isochrone. This represents all the supermarkets that customers within our stores can find within a 5-minute-walk distance.
Note we apply a filter to only count grocery stores (competitors).
Step10: 3.2.2 Sociodemographic and socioeconomic data
Now we'll enrich our dataframe with Unica360 Sociodemographics premium dataset.
For more details on how to discover a dataset, please check <a href='#example-data-discovery-in-the-data-observatory' target='_blank'>this notebook</a> or take a look at our Guides.
Step11: Explore and identify the variables of interest
We can get a detailed description of every variable.
Step12: We decide to enrich our dataframe with the following variables.
Step13: 3.2.3 Spatial lag variables
In this section, we'll calculate the following spatial lag variables
Step14: 3.2.3.2 Distance to closest Carrefour Express
Step15: 3.3 Visualize enrichment | Python Code:
import geopandas as gpd
import ipywidgets as widgets
import numpy as np
import pandas as pd
import pyproj
from cartoframes.auth import set_default_credentials
from cartoframes.data.observatory import *
from cartoframes.data.services import Geocoding, Isolines
from cartoframes.viz import *
from IPython.display import clear_output, display
from scipy.spatial.distance import cdist
from shapely import wkt
from shapely.geometry import Point
pd.set_option('display.max_columns', 100)
Explanation: Revenue Prediction for Site Selection
Whether it’s expansion, consolidation or performance monitoring, understanding revenue drivers is essential for Site Planning in many sectors such as Retail or Restaurant and Food Services.
This notebook walks you through all the data collection and preparation steps required for building a revenue prediction model. The main steps followed are:
1. Processing data. Geocoding
2. Spatial analysis of client's data
3. Enrichment
3.1 Calculate isochrones
3.2 Enrich isochrones
Modeling hints
We'll use CARTOframes throughout the analysis.
Note this use case leverages premium datasets from CARTO's Data Observatory.
Use case description
In order to show all the steps and functionality, we'll work with simulated sales data of Carrefour Express, a chain of small-sized supermarkets.
Carrefour Express (CE) wants to reorganize (open/close) their stores in the city of Madrid (Spain). In order to define an optimal plan of openings and closures, they first need to understand why some stores are performing better (in terms of annual revenue) than others, and identify areas where they could have a high performance.
They have provided us with the stores they have in the city of Madrid, together with the average annual sales of the last three years.
Note the annual sales are not Carrefour Express' actual data.
0. Setup
Import the packages we'll use.
End of explanation
set_default_credentials('creds.json')
Explanation: In order to be able to use the Data Observatory via CARTOframes, you need to set your CARTO account credentials first.
Please, visit the Authentication guide for further detail.
End of explanation
stores = pd.read_csv('https://docs.google.com/spreadsheets/d/1RlOsWN3OBTS0Zhq2lbYvBrwXxSJjpCdWrOWHSqg2JVE/export?gid=0&format=csv')
stores.head()
stores.shape
Explanation: 1. Load data
We'll start by loading the CSV file containing all stores in Madrid with their address and annual revenue. There are 57 stores in total.
End of explanation
stores['complete_address'] = stores.apply(lambda row : f"{row['mainaddressline']}, {row['postcode']}",axis=1)
gc = Geocoding()
gdf, metadata = gc.geocode(stores, street='complete_address', city='areaname2',
state='areaname1', country={'value': 'Spain'}, )
gdf.head(2)
stores = gdf[stores.columns.tolist() + ['the_geom']].rename(columns={'the_geom':'geometry'})
stores.head(2)
Map(Layer(stores,
popup_hover=popup_element('name'),
geom_col='geometry'))
Explanation: 2. Processing data. Geocoding
We have the address of each store, but we need their coordinates in order to perform spatial analysis. We'll use CARTOframes geocoding functionality for this.
End of explanation
stores['name'] = stores['name'].str[18:]
Map(Layer(stores,
style=size_continuous_style('annual_sales'),
popup_hover=[popup_element('name'), popup_element('annual_sales')],
legends=size_continuous_legend('Annual Sales', 'Annual sales in million euros'),
geom_col='geometry'))
Explanation: 3. Spatial Data Analysis
Once we have the stores geocoded, we'll analyze the spatial distribution of annual sales.
End of explanation
iso_service = Isolines()
isochrones_gdf, _ = iso_service.isochrones(stores, [300, 600], mode='walk', exclusive=False)
isochrones_gdf.head()
Map(Layer(isochrones_gdf, geom_col='the_geom', style=basic_style(opacity=0.3)))
stores['iso_5walk'] = isochrones_gdf.loc[isochrones_gdf['data_range'] == 300, 'the_geom'].values
stores['iso_10walk'] = isochrones_gdf.loc[isochrones_gdf['data_range'] == 600, 'the_geom'].values
Explanation: 3. Enrichment
Sales in a grocery store are mainly influenced by:
- The characteristics of the population who live in the area around the store
- Competitors
- How busy the area around is (residential, touristic, work)
- How many people move around the area
In order to enrich our initial dataset with this information, we first need to define the area of influence (catchment area) of the different stores. Once we have the catchment area, we'll bring all the data related to that area from CARTO's Data Observatory.
3.1 Isochrones
Because of the characteristics of Carrefour Express' customers, we can define their catchment areas by acknowledging that:
- Their customers usually make small to medium purchases
- Their customers live close to the supermarket
Therefore, we'll consider 5-minute-walking isochrones as their area of influence.
Explore our Guides to learn more about isochornes.
End of explanation
enrichment = Enrichment()
Explanation: 3.2 Enrichment
Once we have the area of influence of every store, we can enrich our initial data.
End of explanation
dataset = Dataset.get('pb_points_of_i_94bda91b')
dataset.variables.to_dataframe().head()
Explanation: 3.2.1 POIs
We'll start by enriching with POI data. We'll calculate for every store:
- The number of POIs within the 5-minute-walk isochrone. This will give us a measurement of how commercially busy the area is.
- The number of competitors within the 10-minute-walk isochrone. Note we're taking here 10 minutes because we are interested in knowing all competitors that people living within the 5-minute-walk isochrone can reach in a 5-minute walk.
We will use Pitney Bowes' Points Of Interest premium dataset.
Take a look at <a href='#example-access-premium-data-from-the-data-observatory' target='_blank'>this template</a> for more details on how to access and download a premium dataset.
For more details on how to discover a dataset, please check <a href='#example-data-discovery-in-the-data-observatory' target='_blank'>this notebook</a> or take a look at our Guides.
End of explanation
enriched_dataset_gdf = enrichment.enrich_polygons(
stores,
variables=['CLASS_517d6003'],
aggregation='COUNT',
geom_col='iso_5walk'
)
enriched_dataset_gdf.head()
stores['n_pois'] = enriched_dataset_gdf['CLASS'].values
Map(Layer(stores, geom_col='iso_5walk', style=color_bins_style('n_pois')))
Explanation: Number of POIs
We'll calculate the number of POIs within each store's catchment area. This gives us an idea of how busy the area is.
End of explanation
enriched_dataset_gdf = enrichment.enrich_polygons(
stores,
variables=['CLASS_517d6003'],
aggregation='COUNT',
geom_col='iso_10walk',
filters={'carto-do.pitney_bowes.pointsofinterest_pointsofinterest_esp_latlon_v1_monthly_v1.CLASS':
"= 'GROCERY STORES'"}
)
stores['n_competitors'] = enriched_dataset_gdf['CLASS'].values
stores.head(3)
Explanation: Number of competitors
Next, we'll count the number of competitors within the 10 minute isochrone. This represents all the supermarkets that customers within our stores can find within a 5-minute-walk distance.
Note we apply a filter to only count grocery stores (competitors).
End of explanation
dataset = Dataset.get('u360_sociodemogr_28e93b81')
dataset.head()
Explanation: 3.2.2 Sociodemographic and socioeconomic data
Now we'll enrich our dataframe with Unica360 Sociodemographics premium dataset.
For more details on how to discover a dataset, please check <a href='#example-data-discovery-in-the-data-observatory' target='_blank'>this notebook</a> or take a look at our Guides.
End of explanation
Variable.get('C02_01_GASTO_M__7ad08d93').to_dict()
Explanation: Explore and identify the variables of interest
We can get a detailed description of every variable.
End of explanation
vars_enrichment = ['P_T_9be2c6a7',
'P_ED_00_14_M_b66ee9e9', 'P_ED_00_14_H_c6041d66', 'P_ED_15_24_M_5261dc00', 'P_ED_15_24_H_220b288f',
'P_ED_25_44_M_46e29941', 'P_ED_25_44_H_36886dce', 'P_ED_45_64_M_8f3b64f0', 'P_ED_45_64_H_ff51907f',
'P_ED_65_79_M_a8c081ef', 'P_ED_65_79_H_d8aa7560', 'P_ED_80_MAS_M_c1c729f7', 'P_ED_80_MAS_H_b1addd78',
'renta_hab_disp_e4a8896c', 'C02_01_GASTO_M__7ad08d93']
enriched_dataset_gdf = enrichment.enrich_polygons(
stores,
variables=vars_enrichment,
geom_col='iso_5walk'
)
stores = enriched_dataset_gdf
stores.crs = 'epsg:4326'
stores.columns = map(str.lower, stores.columns)
stores.head()
Explanation: We decide to enrich our dataframe with the following variables.
End of explanation
madrid_city_center = Point(-3.703367, 40.416892)
proj_in = pyproj.Proj('epsg:4326')
proj_out = pyproj.Proj('epsg:25830')
project = pyproj.Transformer.from_proj(proj_in, proj_out).transform
stores['dist_cc'] = stores.set_geometry('geometry').to_crs('epsg:25830').distance(
Point(project(madrid_city_center.y, madrid_city_center.x))).values
stores.head(2)
Explanation: 3.2.3 Spatial lag variables
In this section, we'll calculate the following spatial lag variables:
- Distance to Madrid city center (Puerta del Sol)
In the city of Madrid, all touristic places are close to the Puerta del Sol site. This variable measures how close the store is to touristic places.
- Distance to the closest Carrefour Express
Other interesting spatial lag variables would be the average distance to the 3 closest competitors or the average revenue of the 2 closest Carrefour Express stores, just to mention some extra examples.
3.2.3.1 Distance to Puerta del Sol
End of explanation
dist_array = cdist(stores.set_geometry('geometry').to_crs('epsg:25830').geometry.apply(lambda point:[point.x, point.y]).tolist(),
stores.set_geometry('geometry').to_crs('epsg:25830').geometry.apply(lambda point:[point.x, point.y]).tolist())
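# np.partition(dist_a, 2)[:2] keeps the two smallest entries of each row of the pairwise
# distance matrix (0 for the store itself plus its nearest neighbour), so their max is the
# distance to the closest other Carrefour Express store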
stores['distance_closest_ce'] = list(map(lambda dist_a:np.max(np.partition(dist_a, 2)[:2]), dist_array))
Explanation: 3.2.3.2 Distance to closest Carrefour Express
End of explanation
stores.head()
Map(Layer(stores,
geom_col='iso_5walk',
style=color_bins_style('n_competitors'),
legends=color_bins_legend('# Competitors', 'competitos within 10-minute driving isochrone'),
popup_hover=[popup_element('name', 'Name'),
popup_element('n_pois', 'Number of POIs'),
popup_element('n_competitors', 'Number of competitors'),
popup_element('p_t', 'Population coverage'),
popup_element('c02_01_gasto_m_alimentacion_m', 'Groceries spending'),
popup_element('renta_hab_disp', 'income'),
popup_element('distance_closest_ce', 'Distance to closest CE')],
widgets=[histogram_widget('n_pois', 'Number of POIs', description='Select a range of values to filter', buckets=10),
histogram_widget('n_competitors', 'Number of competitors', description='Select a range of values to filter', buckets=10),
histogram_widget('dist_cc', 'Distance to city center', description='Select a range of values to filter', buckets=10),
histogram_widget('distance_closest_ce', 'Distance to closest CE store', description='Select a range of values to filter', buckets=10)]))
Explanation: 3.3 Visualize enrichment
End of explanation |
14,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maps for when the living gets tough
Step1: List of blocked reactions in the model
The following reactions were blocked in the original (Nagarajan,2013) model and still blocked in our model, FYI.
Step2: Make sure there is no Gibbs free energy being generated from nothing
Check that in no combination of enzyme alternatives there can be energy production (ATP) from nothing.
By "from nothing" we mean that we allow nothing to enter the cell from the outside (i.e. no glucose uptake).
The table generated should contain only zeros, indicating zero ATP synthesis.
Step3: Visualize the flux distribution when performing acetogenesis
Using https
Step4: Show that the entire Wood-Ljungdahl pathway and the hydrogenases are needed to produce acetate
To show that the entire Wood-Ljungdahl pathway needs to be present we sequentially knock out one step in the pathway and ask the model to produce acetate from CO2/H2.
If the result is zero, acetate cannot be produced. If it is non-zero acetate may still be synthesized and the considered reaction is not predicted to be essential.
From the resulting table we see that almost all reactions are essential
Step5: Show also that the hydrogenases are essential
Using the same approach as for the WLP, we check whether the hydrogenases are essential for the acetogenesis process.
The ljungdahlii GEMM actually contains 4 hydrogenase reactions. The result table clearly shows that at least one hydrogenase must be present and that acetogenesis is possible with either the HYDFDN2r, HYDFDN or the HYDFDi.
Step6: Reproduce all 6 situations from Fig. 3 of (Schuchmann, 2014)
Here we consider a similar analysis (but performed with FBA on the metabolic map instead of on paper on a small reaction network) as in Figure 3 in the publication by Schuchmann and Müller.
We extended the considered scenarios of Figure 3 by also considering alternatives in the MTHFD and MTHFR reactions since Schuchmann and Müller consider different versions of these enzymes than the model by Nagarajan et al.
In brackets we show the flux through the Nfn complex to show its contribution to each flux pattern. The set of Nfn fluxes shows that first of all it is not essential and second it may function at different flux levels and in different directions.
Step7: Effect of the redox on BHB yield
Since the enzymes affect ATP yield coupled to acetogenesis, do they also affect product yield? Here we perform the same simulations as above but using beta-hydroxybutyrate synthesis as the objective function | Python Code:
import cobra
import pandas as pd
pd.set_option('display.max_colwidth', -1)
import re
import traceback
import escher
# import local functions
from show_map import show_map
# Load our modified Nagarajan et al., 2013 model
escher_file = '../Data/Escher/escher_map_c_ljungdahlii_acetogenesis_2020.json'
M = cobra.io.read_sbml_model("../Data/models/c_ljungdahlii_nagarajan_2013_update.xml")
Explanation: Maps for when the living gets tough: Maneuvering through a hostile energy landscape
Thierry D.G.A Mondeel, Samrina Rehman, Yanfei Zhang, Malkhey Verma, Peter Dürre, Matteo Barberis and Hans V. Westerhoff
Notebook by: Thierry Mondeel
This jupyter notebook (http://jupyter.org/) contains all code (and a bit extra) to reproduce the analysis in the conference paper for http://www.fosbe2016.ovgu.de/. Using http://mybinder.org/ this notebook is available and executable in the cloud.
This notebook is our attempt at achieving computational reproducibility: https://doi.org/10.1371/journal.pcbi.1003285.
The story depends heavily on
- the publication of a genome-wide metabolic map of Clostridium ljungdahlii (Nagarajan,2013) https://doi.org/10.1186/1475-2859-12-118
- the review paper by Schuchmann and Müller https://doi.org/10.1038/nrmicro3365
Abstract
With genome sequencing of thousands of organisms, a scaffold has become available for data integration: molecular information can now be organized by attaching it to the genes and their gene-expression products. It is however, the genome that is selfish not the gene, making it necessary to organize the information into maps that enable functional interpretation of the fitness of the genome. Using flux balance analysis one can calculate the theoretical capabilities of the living organism. Here we examine whether according to this genome organized information, organisms such as the ones present when life on Earth began, are able to assimilate the Gibbs energy and carbon that life needs for its reproduction and maintenance, from a relatively poor Gibbs-energy environment. We shall address how Clostridium ljungdahlii may use at least two special features and one special pathway to this end: gear-shifting, electron bifurcation and the Wood-Ljungdahl pathway. Additionally, we examined whether the C. ljungdahlii map can also help solve the problem of waste management. We find that there is a definite effect of the choices of redox equivalents in the Wood-Ljungdahl pathway and the hydrogenase on the yield of interesting products like hydroxybutyrate. We provide a drawing of a subset of the metabolic network that may be utilized to project flux distributions onto by the community in future works. Furthermore, we make all the code leading to the results discussed here publicly available for the benefit of future work.
Introduction
How to interact with this document
This Jupyter notebook (http://jupyter.org/) contains text cells (like this one), input and output cells. Input cells will contain code to do something in Python and (usually) perform a simulation. The output cells contain the result, i.e. a table or an image.
The idea of this document is that you can:
- Check the code by reading it
- Run the code by going to "cell > run all" or by selecting one input cell and clicking the ">|" button in the toolbar at the top of the screen.
- See the results (in the output cells)
- Edit the code (if you know how to write code) and see the results of your changes
General comments
In this work, we aim to compare the analysis by Schuchmann and Müller with the predictions of maximal ATP synthesis emanating from the genome-wide metabolic reconstruction of the model acetogen Clostridium ljungdahlii through flux-balance analysis (FBA) (Orth, 2010). Some analysis on the effect of redox equivalents on growth and product synthesis was already present in (Nagarajan, 2013). Specifically, it was shown that the genome-wide map predicts the possibility of growth on CO2/H2 and CO and the effect of various options in redox equivalents were analyzed under the knockout of acetate kinase. However, we hope to extend that analysis here by including various alternative reactions that were considered in the treatment by Schuchmann and Müller. Specifically, we will investigate alternatives in the electron donors/acceptors for various enzymes and their effect on ATP yield coupled to acetogenesis. Additionally, we will focus on the importance of the Wood-Ljungdahl pathway as opposed to single enzymes, the need for electron bifurcation and the Nfn complex, the concept of gear-shifting, the requirement of low gear and advantages of high gear operation, and how much product yield might be attained when engineering C. Ljungdahlii with two additional genes for producing poly-hydroxybutyrate (PHB) under various redox alterations.
We started from the model from (Nagarajan,2013) downloaded from http://bigg.ucsd.edu/models/iHN637
Then we simply added the reactions considered in Schuchmann and Müller that were not present yet in the model.
In all simulations, unless stated otherwise, the objective is the ATP maintenance reaction, we allow CO2/H2 uptake in a (2/4) ratio, and the output flux of acetate is forced to be 1.
On flux balance analysis
For all simulations we use flux balance analysis (Orth, 2010) with help from COBRApy (Ebrahim, 2013).
Briefly, this technique concerns the following linear programming problem:
$$\text{maximize or minimize } Z=c^T v, \text{ such that for all } k:$$
$$Sv=0$$
$$\alpha_k \leq v_k \leq \beta_k$$
where $S$ is the stoichiometric matrix for the metabolites, $v$ is the vector of fluxes through all reactions including exchange reactions with the environment of the system considered, $c$ is a vector of weights generating the linear combination of fluxes that make up the objective function $Z$ and $\alpha$ and $\beta$ are the vectors of lower and upper bounds on these fluxes.
A flux distribution returned by FBA is therefore such that all metabolites are produced and consumed in equal amounts, the flux boundaries are accommodated and the flux distribution maximizes (or minimizes) a linear combination of fluxes in the model.
Set up the python environment
Nothing interesting here, just execute this. This cell of code loads the required python modules and the updated (Nagarajan, 2013) model.
End of explanation
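# Illustrative sketch (not from the original study): the linear program described
# above, solved for a made-up 2-metabolite, 3-reaction toy network. This assumes
# scipy is installed; the real analyses in this notebook use COBRApy on the
# genome-scale model instead.
from scipy.optimize import linprog
S_toy = [[1, -1, 0],   # metabolite A: produced by v0, consumed by v1
         [0, 1, -1]]   # metabolite B: produced by v1, consumed by v2
c_toy = [0, 0, -1]     # maximize v2 (linprog minimizes, hence the sign flip)
lp = linprog(c_toy, A_eq=S_toy, b_eq=[0, 0], bounds=[(0, 10)] * 3)
print('Toy FBA flux distribution (all fluxes hit the upper bound):', lp.x)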
model = M.copy()
d = {}
for rxn in model.reactions:
if rxn.lower_bound == 0 and rxn.upper_bound == 0:
d[rxn.id] = [rxn.name,rxn.reaction]
df = pd.DataFrame.from_dict(d)
df = df.transpose()
df.columns = ['Name','Reaction']
df
Explanation: List of blocked reactions in the model
The following reactions were blocked in the original (Nagarajan,2013) model and still blocked in our model, FYI.
End of explanation
results = [[],[],[]] # this will be filled with real results
for MTHFR in range(2):
for MTHFD in range(2):
for FDH in range(3):
for HYD in range(2):
pfbaSol = []
model = M.copy()
# remove carbon and H2 from medium
model.reactions.EX_co2_e.lower_bound = 0
model.reactions.EX_h2_e.lower_bound = 0
model.reactions.EX_ac_e.lower_bound = 0 # do not force acetate flux
if FDH == 0:
pass # keep the default FDH7
elif FDH == 1:
model.reactions.FDH7.lower_bound = 0; model.reactions.FDH7.upper_bound = 0;
model.reactions.FDHH2.lower_bound = 0; model.reactions.FDHH2.upper_bound = 0
model.reactions.FDHFDNADPH.lower_bound = -1000; model.reactions.FDHFDNADPH.upper_bound = 1000
else:
model.reactions.FDH7.lower_bound = 0; model.reactions.FDH7.upper_bound = 0;
model.reactions.FDHH2.lower_bound = -1000; model.reactions.FDHH2.upper_bound = 1000
model.reactions.FDHFDNADPH.lower_bound = 0; model.reactions.FDHFDNADPH.upper_bound = 0
if HYD == 0:
model.reactions.HYDFDN.lower_bound = -1000; model.reactions.HYDFDN.upper_bound = 0 # reversed flux! The Fd + NADH hydrogenase
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0 # the Fd + NADPH hydrogenase
else:
model.reactions.HYDFDN.lower_bound = 0; model.reactions.HYDFDN.upper_bound = 0 # reversed flux! The Fd + NADH hydrogenase
model.reactions.HYDFDN2r.lower_bound = -1000; model.reactions.HYDFDN2r.upper_bound = 1000 # the Fd + NADPH hydrogenase
if MTHFD == 0:
model.reactions.MTHFD.upper_bound = 0; model.reactions.MTHFD_alt.upper_bound = 1000
else:
model.reactions.MTHFD.upper_bound = 1000; model.reactions.MTHFD_alt.upper_bound = 0
if MTHFR == 0:
model.reactions.MTHFR5.upper_bound = 0; model.reactions.MTHFR5_alt.upper_bound = 1000
else:
model.reactions.MTHFR5.upper_bound = 1000; model.reactions.MTHFR5_alt.upper_bound = 0
try:
pfbaSol = cobra.flux_analysis.parsimonious.optimize_minimal_flux(model)
results[FDH].append(str(round(abs(pfbaSol.objective_value),3)) + ' (' + str(round(pfbaSol.fluxes['FRNDPR2r_1'],2)) + ')' )
except:
traceback.print_exc()
results[FDH].append('NP')
iterables = [['NADH', '2 NADH + Fd'], ['NADH', 'NADPH'],['Fd + NADH','Fd + NADPH']]
index = pd.MultiIndex.from_product(iterables, names=['MTHFR', 'MTHFD','FDH \ HYD'])
pd.DataFrame(results,columns=index,index=['Fd','FD+NADPH','H2'])
Explanation: Make sure there is no Gibbs free energy being generated from nothing
Check that in no combination of enzyme alternatives there can be energy production (ATP) from nothing.
By "from nothing" we mean that we allow nothing to enter the cell from the outside (i.e. no glucose uptake).
The table generated should contain only zeros, indicating zero ATP synthesis.
End of explanation
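# Supplementary sketch: an equivalent one-pass version of the "energy from nothing"
# check above -- close every exchange reaction (identified here by the 'EX_' prefix)
# and maximize ATP maintenance; a thermodynamically sound network should give zero.
model = M.copy()
model.reactions.ATPM.lower_bound = 0          # drop the maintenance requirement, as below
for rxn in model.reactions:
    if rxn.id.startswith('EX_'):
        rxn.lower_bound = 0                   # block all uptake
model.objective = model.reactions.ATPM
print('Maximal ATP flux with all uptake blocked:', model.optimize().objective_value)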
model = M.copy()
# using the Schuchmann WLP
model.reactions.MTHFD.lower_bound,model.reactions.MTHFD.upper_bound = (0,0)
model.reactions.MTHFR5.lower_bound,model.reactions.MTHFR5.upper_bound = (0,0)
model.reactions.MTHFD_alt.lower_bound,model.reactions.MTHFD_alt.upper_bound = (-1000,1000)
model.reactions.MTHFR5_alt.lower_bound,model.reactions.MTHFR5_alt.upper_bound = (-1000,1000)
# the bad hydrogenase
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0
model.reactions.HYDFDN.lower_bound = -1000; model.reactions.HYDFDN.upper_bound = 1000
model.objective = model.reactions.ATPM
model.reactions.ATPM.lower_bound = 0
model.reactions.EX_ac_e.lower_bound = 1
model.reactions.EX_co2_e.lower_bound = -2
model.reactions.EX_h2_e.lower_bound = -4
pfbaSol = cobra.flux_analysis.parsimonious.optimize_minimal_flux(model)
print('ATP flux coupled to acetogenesis:',pfbaSol.fluxes['ATPM'])
b = show_map(pfbaSol, escher_file)
b.save_html('../Figures/acetate_CO2+H2_schuchmann_FDH7.html',overwrite=True)
b.display_in_notebook()
Explanation: Visualize the flux distribution when performing acetogenesis
Using https://escher.github.io/ developed by Zak King we developed a focused network drawing of the C. ljungdahlii genome-wide metabolic map containing a subset of the reactions of interest here. This allows us to visualize the predicted flux distributions.
For simplicity we only draw: glycolysis, the Wood-Ljungdahl pathway, the branched TCA cycle, the synthesis pathways of acetate and butanol and the main exchange reactions (input/output) of interest.
Here we visualize the case where we allow uptake of CO2 and H2 (at 2 and 4 units respectively) and force 1 unit of acetate to be produced. We define the metabolic network to have the exact reactions Schuchmann and Müller considered in the first cell of the table in Figure 3: the Fd dependent formate dehydrogenase together with the NADH dependent hydrogenase.
The objective function is set to the ATP maintenance reaction to predict the maximal amount of ATP that may be coupled to this acetogenesis process. According to Schuchmann and Müller this should return 0 for this network configuration.
End of explanation
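# Sketch: besides the Escher drawing, the pFBA solution can be inspected directly;
# pfbaSol.fluxes is a pandas Series, so e.g. the non-zero fluxes are easy to list.
nonzero_fluxes = pfbaSol.fluxes[pfbaSol.fluxes.abs() > 1e-9]
print(nonzero_fluxes.sort_values().head(10))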
list_of_knockouts = ['FDH7','FTHFLi','MTHFC',\
'MTHFD','MTHFR5','METR','CODH_ACS']
results = {}
for rxn in list_of_knockouts:
model = M.copy()
model.reactions.EX_ac_e.lower_bound = 0
model.objective = model.reactions.EX_ac_e
model.reactions.get_by_id(rxn).lower_bound = 0; model.reactions.get_by_id(rxn).upper_bound = 0
results[rxn] = round(abs(model.optimize().objective_value),4)
pd.DataFrame(results.values(),index=results.keys(),columns=['Acetate flux'])
Explanation: Show that the entire Wood-Ljungdahl pathway and the hydrogenases are needed to produce acetate
To show that the entire Wood-Ljungdahl pathway needs to be present we sequentially knock out one step in the pathway and ask the model to produce acetate from CO2/H2.
If the result is zero, acetate cannot be produced. If it is non-zero acetate may still be synthesized and the considered reaction is not predicted to be essential.
From the resulting table we see that almost all reactions are essential: the exception is the formate dehydrogenase, since formate may also be generated from pyruvate (see the network drawing).
End of explanation
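# Sketch: condensing the knockout scan above -- any reaction whose removal drives the
# maximal acetate flux to (numerically) zero is predicted to be essential.
essential = sorted(rxn for rxn, flux in results.items() if flux < 1e-6)
print('Predicted essential for acetogenesis:', essential)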
model = M.copy()
model.reactions.EX_ac_e.lower_bound = 0
model.objective = model.reactions.EX_ac_e
results = {}
# no hydrogenases
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0;
model.reactions.HYD2.lower_bound = 0; model.reactions.HYD2.upper_bound = 0
results['No hydrogenase'] = round(abs(model.optimize().objective_value),4)
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0;
results['Only HYD2'] = round(abs(model.optimize().objective_value),4)
model.reactions.HYD2.lower_bound = 0; model.reactions.HYD2.upper_bound = 0;
model.reactions.HYDFDN.lower_bound = -1000; model.reactions.HYDFDN2r.upper_bound = 0;
results['only HYDFDN'] = round(abs(model.optimize().objective_value),4)
model.reactions.HYD2.lower_bound = 0; model.reactions.HYD2.upper_bound = 0;
model.reactions.HYDFDN.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0;
model.reactions.HYDFDN2r.lower_bound = -1000; model.reactions.HYDFDN2r.upper_bound = 1000;
results['only HYDFDN2r'] = round(abs(model.optimize().objective_value),4)
model.reactions.HYD2.lower_bound = 0; model.reactions.HYD2.upper_bound = 0;
model.reactions.HYDFDN.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0;
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0;
model.reactions.HYDFDi.lower_bound = -1000; model.reactions.HYDFDi.upper_bound = 0;
results['only HYDFDi'] = round(abs(model.optimize().objective_value),4)
pd.DataFrame(results.values(),index=results.keys(),columns=['Acetate flux'])
Explanation: Show also that the hydrogenases are essential
Using the same approach as for the WLP, we check whether the hydrogenases are essential for the acetogenesis process.
The C. ljungdahlii genome-scale metabolic model (GEMM) actually contains 4 hydrogenase reactions. The result table clearly shows that at least one hydrogenase must be present and that acetogenesis is possible with either HYDFDN2r, HYDFDN or HYDFDi.
End of explanation
results = [[],[],[]] # this will be filled with real results
for MTHFR in range(2):
for MTHFD in range(2):
for FDH in range(3):
for HYD in range(2):
pfbaSol = []
model = M.copy()
if FDH == 0:
pass # keep the default FDH7
elif FDH == 1:
model.reactions.FDH7.upper_bound = 0; model.reactions.FDH7.lower_bound = 0;
model.reactions.FDHH2.lower_bound = 0; model.reactions.FDHH2.upper_bound = 0
model.reactions.FDHFDNADPH.lower_bound = -1000; model.reactions.FDHFDNADPH.upper_bound = 1000
else:
model.reactions.FDH7.upper_bound = 0; model.reactions.FDH7.lower_bound = 0;
model.reactions.FDHH2.lower_bound = -1000; model.reactions.FDHH2.upper_bound = 1000
model.reactions.FDHFDNADPH.lower_bound = 0; model.reactions.FDHFDNADPH.upper_bound = 0
if HYD == 0:
model.reactions.HYDFDN.lower_bound = -1000; model.reactions.HYDFDN.upper_bound = 0 # reversed flux! The Fd + NADH hydrogenase
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0 # the Fd + NADPH hydrogenase
else:
model.reactions.HYDFDN.lower_bound = 0; model.reactions.HYDFDN.upper_bound = 0 # reversed flux! The Fd + NADH hydrogenase
model.reactions.HYDFDN2r.lower_bound = -1000; model.reactions.HYDFDN2r.upper_bound = 1000 # the Fd + NADPH hydrogenase
if MTHFD == 0:
model.reactions.MTHFD.upper_bound = 0; model.reactions.MTHFD_alt.upper_bound = 1000
else:
model.reactions.MTHFD.upper_bound = 1000; model.reactions.MTHFD_alt.upper_bound = 0
if MTHFR == 0:
model.reactions.MTHFR5.upper_bound = 0; model.reactions.MTHFR5_alt.upper_bound = 1000
else:
model.reactions.MTHFR5.upper_bound = 1000; model.reactions.MTHFR5_alt.upper_bound = 0
try:
pfbaSol = cobra.flux_analysis.parsimonious.optimize_minimal_flux(model)
results[FDH].append(str(round(abs(pfbaSol.fluxes['ATPM']),3)) + ' (' + str(round(pfbaSol.fluxes['FRNDPR2r_1'],2)) + ')' )
b = show_map(pfbaSol, escher_file)
b.save_html('../Figures/fig_3/FDH='+str(FDH)+'_HYD='+str(HYD)+'_MTHFD='+str(MTHFD)+'_MTHFR='+str(MTHFR)+'.html',overwrite=True)
except:
#traceback.print_exc()
results[FDH].append('NP')
iterables = [['NADH', '2 NADH + Fd'], ['NADH', 'NADPH'],['Fd + NADH','Fd + NADPH']]
index = pd.MultiIndex.from_product(iterables, names=['MTHFR', 'MTHFD','FDH \ HYD'])
pd.DataFrame(results, columns=index,index=['Fd','FD+NADPH','H2'])
Explanation: Reproduce all 6 situations from Fig. 3 of (Schuchmann, 2014)
Here we consider a similar analysis (but performed with FBA on the metabolic map instead of on paper on a small reaction network) as in Figure 3 in the publication by Schuchmann and Müller.
We extended the considered scenarios of Figure 3 by also considering alternatives in the MTHFD and MTHFR reactions since Schuchmann and Müller consider different versions of these enzymes than the model by Nagarajan et al.
In brackets we show the flux through the Nfn complex to indicate its contribution to each flux pattern. The set of Nfn fluxes shows that, first of all, the Nfn complex is not essential and, second, that it may function at different flux levels and in different directions.
End of explanation
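# Sketch: the table entries above are strings like '1.5 (0.25)' or 'NP'; stripping
# them down to the leading ATP yield gives plain numbers for easier comparison.
atp_yields = [[None if entry == 'NP' else float(entry.split()[0]) for entry in row]
              for row in results]
print(atp_yields)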
results = [[],[],[]] # this will be filled with real results
for MTHFR in range(2):
for MTHFD in range(2):
for FDH in range(3):
for HYD in range(2):
pfbaSol = []
model = M.copy()
model.reactions.ATPM.lower_bound = 0
model.reactions.EX_ac_e.lower_bound = 0
model.reactions.EX_co2_e.lower_bound = 0
model.reactions.EX_co_e.lower_bound = -2
model.reactions.EX_h2_e.lower_bound = -4
model.objective = model.reactions.DM_3hbcoa_c
if FDH == 0:
pass # keep the default FDH7
elif FDH == 1:
model.reactions.FDH7.upper_bound = 0; model.reactions.FDH7.lower_bound = 0;
model.reactions.FDHH2.lower_bound = 0; model.reactions.FDHH2.upper_bound = 0
model.reactions.FDHFDNADPH.lower_bound = -1000; model.reactions.FDHFDNADPH.upper_bound = 1000
else:
model.reactions.FDH7.upper_bound = 0; model.reactions.FDH7.lower_bound = 0;
model.reactions.FDHH2.lower_bound = -1000; model.reactions.FDHH2.upper_bound = 1000
model.reactions.FDHFDNADPH.lower_bound = 0; model.reactions.FDHFDNADPH.upper_bound = 0
if HYD == 0:
model.reactions.HYDFDN.lower_bound = -1000; model.reactions.HYDFDN.upper_bound = 0 # reversed flux! The Fd + NADH hydrogenase
model.reactions.HYDFDN2r.lower_bound = 0; model.reactions.HYDFDN2r.upper_bound = 0 # the Fd + NADPH hydrogenase
else:
model.reactions.HYDFDN.lower_bound = 0; model.reactions.HYDFDN.upper_bound = 0 # reversed flux! The Fd + NADH hydrogenase
model.reactions.HYDFDN2r.lower_bound = -1000; model.reactions.HYDFDN2r.upper_bound = 1000 # the Fd + NADPH hydrogenase
if MTHFD == 0:
model.reactions.MTHFD.upper_bound = 0; model.reactions.MTHFD_alt.upper_bound = 1000
else:
model.reactions.MTHFD.upper_bound = 1000; model.reactions.MTHFD_alt.upper_bound = 0
if MTHFR == 0:
model.reactions.MTHFR5.upper_bound = 0; model.reactions.MTHFR5_alt.upper_bound = 1000
else:
model.reactions.MTHFR5.upper_bound = 1000; model.reactions.MTHFR5_alt.upper_bound = 0
try:
pfbaSol = cobra.flux_analysis.parsimonious.optimize_minimal_flux(model)
results[FDH].append(str(round(abs(pfbaSol.fluxes['DM_3hbcoa_c']),3)) + ' (' + str(round(pfbaSol.fluxes['FRNDPR2r_1'],3)) + ')' )
b = show_map(pfbaSol, escher_file)
b.save_html('../Figures/3hbcoa_synthesis/FDH='+str(FDH)+'_HYD='+str(HYD)+'_MTHFD='+str(MTHFD)+'_MTHFR='+str(MTHFR)+'.html',overwrite=True)
except:
results[FDH].append('NP')
iterables = [['NADH', '2 NADH + Fd'], ['NADH', 'NADPH'],['Fd + NADH','Fd + NADPH']]
index = pd.MultiIndex.from_product(iterables, names=['MTHFR', 'MTHFD','FDH \ HYD'])
pd.DataFrame(results, columns=index,index=['Fd','FD+NADPH','H2'])
Explanation: Effect of the redox on BHB yield
Since the enzymes affect ATP yield coupled to acetogenesis, do they also affect product yield? Here we perform the same simulations as above but using beta-hydroxybutyrate synthesis as the objective function
End of explanation |
14,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiprocessing and multithreading
Parallelism in python
Step1: On Windows
Step2: Data parallelism versus task parallelism
Multithreading versus multiple threads
The global interpreter lock
Processes versus threads
Shared memory and shared objects
Shared objects
Step3: Manager and proxies
Step4: See
Step5: Issues
Step6: Queue and Pipe
Step7: Synchronization with Lock and Event
Step8: High-level task parallelism
Step9: Variants
Step12: Issues
Step13: Issues
Step14: EXERCISE
Step15: Programming efficiency
Step16: Strives for natural programming constructs in parallel code
Step17: Programming models and hierarchical computing
Step18: Pool caching
Not covered
Step19: Even easier
Step20: Not covered | Python Code:
%%file multihello.py
'''hello from another process
'''
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('world',))
p.start()
p.join()
# EOF
!python2.7 multihello.py
Explanation: Multiprocessing and multithreading
Parallelism in python
End of explanation
if __name__ == '__main__':
from multiprocessing import freeze_support
freeze_support()
# Then, do multiprocessing stuff...
Explanation: On Windows: multiprocessing spawns with subprocess.Popen
End of explanation
%%file sharedobj.py
'''demonstrate shared objects in multiprocessing
'''
from multiprocessing import Process, Value, Array
def f(n, a):
n.value = 3.1415927
for i in range(len(a)):
a[i] = -a[i]
if __name__ == '__main__':
num = Value('d', 0.0)
arr = Array('i', range(10))
p = Process(target=f, args=(num, arr))
p.start()
p.join()
print num.value
print arr[:]
# EOF
!python2.7 sharedobj.py
Explanation: Data parallelism versus task parallelism
Multithreading versus multiple threads
The global interpreter lock
Processes versus threads
Shared memory and shared objects
Shared objects: Value and Array
End of explanation
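# Sketch (illustrative; timings are machine dependent): the GIL keeps threads from
# speeding up CPU-bound work, while separate processes do -- compare the two pools.
import time
from multiprocessing import Pool
from multiprocessing.pool import ThreadPool

def cpu_bound(n):
    return sum(i * i for i in xrange(n))

if __name__ == '__main__':
    jobs = [2000000] * 4
    for label, Maker in [('processes', Pool), ('threads', ThreadPool)]:
        pool = Maker(4)
        t0 = time.time()
        pool.map(cpu_bound, jobs)
        print label, "took", round(time.time() - t0, 2), "seconds"
        pool.close(); pool.join()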
%%file sharedproxy.py
'''demonstrate sharing objects by proxy through a manager
'''
from multiprocessing import Process, Manager
def f(d, l):
d[1] = '1'
d['2'] = 2
d[0.25] = None
l.reverse()
if __name__ == '__main__':
manager = Manager()
d = manager.dict()
l = manager.list(range(10))
p = Process(target=f, args=(d, l))
p.start()
p.join()
print d
print l
# EOF
!python2.7 sharedproxy.py
Explanation: Manager and proxies
End of explanation
%%file numpyshared.py
'''demonstrating shared objects using numpy and ctypes
'''
import multiprocessing as mp
from multiprocessing import sharedctypes
from numpy import ctypeslib
def fill_arr(arr_view, i):
arr_view.fill(i)
if __name__ == '__main__':
ra = sharedctypes.RawArray('i', 4)
arr = ctypeslib.as_array(ra)
arr.shape = (2, 2)
p1 = mp.Process(target=fill_arr, args=(arr[:1, :], 1))
p2 = mp.Process(target=fill_arr, args=(arr[1:, :], 2))
p1.start(); p2.start()
p1.join(); p2.join()
print arr
!python2.7 numpyshared.py
Explanation: See: https://docs.python.org/2/library/multiprocessing.html
Working in C with ctypes and numpy
End of explanation
%%file mprocess.py
'''demonstrate the Process class
'''
import multiprocessing as mp
from time import sleep
from random import random
def worker(num):
sleep(2.0 * random())
name = mp.current_process().name
print "worker {},name:{}".format(num, name)
if __name__ == '__main__':
master = mp.current_process().name
print "Master name: {}".format(master)
for i in range(2):
p = mp.Process(target=worker, args=(i,))
p.start()
# Close all spawned child processes
[p.join() for p in mp.active_children()]
!python2.7 mprocess.py
Explanation: Issues: threading and locks
Low-level task parallelism: point to point communication
Process
End of explanation
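# Sketch: the threading API mirrors multiprocessing.Process almost exactly, which is
# why switching between threads and processes is usually a one-line change.
import threading

def greet(name):
    print "hello", name, "from", threading.current_thread().name

t = threading.Thread(target=greet, args=('world',))
t.start()
t.join()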
%%file queuepipe.py
'''demonstrate queues and pipes
'''
import multiprocessing as mp
import pickle
def qworker(q):
v = q.get() # blocking!
print "queue worker got '{}' from parent".format(v)
def pworker(p):
import pickle # needed for encapsulation
msg = 'hello hello hello'
print "pipe worker sending {!r} to parent".format(msg)
p.send(msg)
v = p.recv()
print "pipe worker got {!r} from parent".format(v)
print "unpickled to {}".format(pickle.loads(v))
if __name__ == '__main__':
q = mp.Queue()
p = mp.Process(target=qworker, args=(q,))
p.start() # blocks at q.get()
v = 'python rocks!'
print "putting '{}' on queue".format(v)
q.put(v)
p.join()
print ''
# The two ends of the pipe: the parent and the child connections
p_conn, c_conn = mp.Pipe()
p = mp.Process(target=pworker, args=(c_conn,))
p.start()
msg = pickle.dumps([1,2,3],-1)
print "got {!r} from child".format(p_conn.recv())
print "sending {!r} to child".format(msg)
p_conn.send(msg)
import datetime
print "\nfinished: {}".format(datetime.date.today())
p.join()
!python2.7 queuepipe.py
Explanation: Queue and Pipe
End of explanation
%%file multi_sync.py
'''demonstrating locks
'''
import multiprocessing as mp
def print_lock(lk, i):
name = mp.current_process().name
lk.acquire()
for j in range(5):
print i, "from process", name
lk.release()
if __name__ == '__main__':
lk = mp.Lock()
ps = [mp.Process(target=print_lock, args=(lk,i)) for i in range(5)]
[p.start() for p in ps]
[p.join() for p in ps]
!python2.7 multi_sync.py
'''events
'''
import multiprocessing as mp
def wait_on_event(e):
name = mp.current_process().name
e.wait()
print name, "finished waiting"
if __name__ == '__main__':
e = mp.Event()
ps = [mp.Process(target=wait_on_event, args=(e,)) for i in range(10)]
[p.start() for p in ps]
print "e.is_set()", e.is_set()
#raw_input("press any key to set event")
e.set()
[p.join() for p in ps]
Explanation: Synchronization with Lock and Event
End of explanation
import multiprocessing as mp
def random_mean(x):
import numpy as np
return round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() processes
p = mp.Pool()
results = p.map(random_mean, range(1,10))
print results
print p.apply(random_mean, [100])
p.close()
p.join()
Explanation: High-level task parallelism: collective communication
The task Pool
pipes (apply) and map
End of explanation
import multiprocessing as mp
def random_mean_count(x):
import numpy as np
return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() processes
p = mp.Pool()
results = p.imap_unordered(random_mean_count, range(1,10))
print "[",
for i in results:
print i,
if abs(i) <= 1.0:
print "...] QUIT"
break
list(results)
p.close()
p.join()
import multiprocessing as mp
def random_mean_count(x):
import numpy as np
return x + round(np.mean(np.random.randint(-x,x+1,10000)), 3)
if __name__ == '__main__':
# create a pool with cpu_count() processes
p = mp.Pool()
results = p.map_async(random_mean_count, range(1,10))
print "Waiting .",
i = 0
while not results.ready():
if not i%4000:
print ".",
i += 1
print results.get()
print "\n", p.apply_async(random_mean_count, [100]).get()
p.close()
p.join()
Explanation: Variants: blocking, iterative, unordered, and asynchronous
End of explanation
import numpy as np
def walk(x, n=100, box=.5, delta=.2):
"perform a random walk"
w = np.cumsum(x + np.random.uniform(-delta,delta,n))
w = np.where(abs(w) > box)[0]
return w[0] if len(w) else n
N = 10
# run N trials, all starting from x=0
pwalk = np.vectorize(walk)
print pwalk(np.zeros(N))
# run again, using list comprehension instead of ufunc
print [walk(0) for i in range(N)]
# run again, using multiprocessing's map
import multiprocessing as mp
p = mp.Pool()
print p.map(walk, [0]*N)
%%file state.py
'''some good state utilities'''
def check_pickle(x, dill=False):
"checks the pickle across a subprocess"
import pickle
import subprocess
if dill:
import dill as pickle
pik = "dill"
else:
pik = "pickle"
fail = True
try:
_x = pickle.dumps(x)
fail = False
finally:
if fail:
print "DUMP FAILED"
msg = "python -c import {0}; print {0}.loads({1})".format(pik,repr(_x))
print "SUCCESS" if not subprocess.call(msg.split(None,2)) else "LOAD FAILED"
def random_seed(s=None):
"sets the seed for calls to 'random()'"
import random
random.seed(s)
try:
from numpy import random
random.seed(s)
except:
pass
return
def random_state(module='random', new=False, seed='!'):
    '''return a (optionally manually seeded) random generator
For a given module, return an object that has random number generation (RNG)
methods available. If new=False, use the global copy of the RNG object.
If seed='!', do not reseed the RNG (using seed=None 'removes' any seeding).
If seed='*', use a seed that depends on the process id (PID); this is useful
for building RNGs that are different across multiple threads or processes.'''
import random
if module == 'random':
rng = random
elif not isinstance(module, type(random)):
# convenience for passing in 'numpy'
if module == 'numpy': module = 'numpy.random'
try:
import importlib
rng = importlib.import_module(module)
except ImportError:
rng = __import__(module, fromlist=module.split('.')[-1:])
elif module.__name__ == 'numpy': # convenience for passing in numpy
from numpy import random as rng
else: rng = module
_rng = getattr(rng, 'RandomState', None) or \
getattr(rng, 'Random') # throw error if no rng found
if new:
rng = _rng()
if seed == '!': # special case: don't reset the seed
return rng
if seed == '*': # special case: random seeding for multiprocessing
try:
try:
import multiprocessing as mp
except ImportError:
import processing as mp
try:
seed = mp.current_process().pid
except AttributeError:
seed = mp.currentProcess().getPid()
except:
seed = 0
import time
seed += int(time.time()*1e6)
# set the random seed (or 'reset' with None)
rng.seed(seed)
return rng
# EOF
Explanation: Issues: random number generators
End of explanation
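# Sketch: why the reseeding helpers above matter -- with the default 'fork' start
# method every worker inherits the parent's RNG state, so un-reseeded workers can
# report the same "random" draw (behaviour is platform and scheduling dependent).
import numpy as np
import multiprocessing as mp

def draw(_):
    return np.random.randint(0, 100)

if __name__ == '__main__':
    np.random.seed(42)
    pool = mp.Pool(4)
    print pool.map(draw, range(4))   # repeated values are common when forked
    pool.close(); pool.join()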
import multiprocess
print multiprocess.Pool().map(lambda x:x**2, range(10))
Explanation: Issues: serialization
Better serialization: multiprocess
End of explanation
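# Sketch: what the one-liner above demonstrates -- the stdlib pickle cannot
# serialize a lambda, while dill (used under the hood by multiprocess) can.
import pickle
import dill
f = lambda x: x**2
try:
    pickle.dumps(f)
except Exception as e:
    print "pickle failed with", type(e).__name__
print "dill round-trip works: f(4) =", dill.loads(dill.dumps(f))(4)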
%%file runppft.py
'''demonstrate ppft
'''
import ppft
def squared(x):
return x*x
server = ppft.Server() # can take 'localhost:8000' or remote:port
result = server.submit(squared, (5,))
result.wait()
print result.finished
print result()
!python2.7 runppft.py
Explanation: EXERCISE: Try several variants of looping patterns to see if you can speed up a toy password cracker.
See: 'exercise'
Code-based versus object-based serialization: pp(ft)
End of explanation
%%file allpool.py
'''demonstrate pool API
'''
import pathos
def sum_squared(x,y):
return (x+y)**2
x = range(5)
y = range(0,10,2)
if __name__ == '__main__':
sp = pathos.pools.SerialPool()
pp = pathos.pools.ParallelPool()
mp = pathos.pools.ProcessPool()
tp = pathos.pools.ThreadPool()
for pool in [sp,pp,mp,tp]:
print pool.map(sum_squared, x, y)
pool.close()
pool.join()
!python2.7 allpool.py
Explanation: Programming efficiency: pathos
Multi-argument map functions
Unified API for threading, multiprocessing, and serial and parallel python (pp)
End of explanation
from itertools import izip
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
if n % 2 == 0:
return False
import math
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
def sleep_add1(x):
from time import sleep
if x < 4: sleep(x/10.0)
return x+1
def sleep_add2(x):
from time import sleep
if x < 4: sleep(x/10.0)
return x+2
def test_with_multipool(Pool):
inputs = range(10)
with Pool() as pool1:
res1 = pool1.amap(sleep_add1, inputs)
with Pool() as pool2:
res2 = pool2.amap(sleep_add2, inputs)
with Pool() as pool3:
for number, prime in izip(PRIMES, pool3.imap(is_prime, PRIMES)):
assert prime if number != PRIMES[-1] else not prime
assert res1.get() == [i+1 for i in inputs]
assert res2.get() == [i+2 for i in inputs]
print "OK"
if __name__ == '__main__':
from pathos.pools import ProcessPool
test_with_multipool(ProcessPool)
Explanation: Strives for natural programming constructs in parallel code
End of explanation
import pathos
from math import sin, cos
if __name__ == '__main__':
mp = pathos.pools.ProcessPool()
tp = pathos.pools.ThreadPool()
print mp.amap(tp.map, [sin, cos], [range(3),range(3)]).get()
mp.close(); tp.close()
mp.join(); tp.join()
Explanation: Programming models and hierarchical computing
End of explanation
localhost>$ ppserver.py -p 8000
Explanation: Pool caching
Not covered: IPython.parallel and scoop
EXERCISE: Let's take another swing at Monte Carlo betting. You'll want to focus on roll.py, trials.py and optimize.py. Can you speed things up with careful placement of a Pool? Are there small modifications to the code that would allow hierarchical parallelism? Can we speed up the calculation, or does parallel computing lose to spin-up overhead? Where are we now hitting the wall?
See: 'solution'
Remote execution
Easy: the pp.Server
End of explanation
>>> def squared(x):
... return x**2
...
>>> import pathos
>>> pool = pathos.pools.ParallelPool(nodes=1, servers=('localhost:8000',))
>>> results = pool.map(squared, range(100))
>>> print pathos.pp.stats()
Job execution statistics:
job count | % of all jobs | job time sum | time per job | job server
65 | 65.00 | 0.2004 | 0.003083 | localhost:8000
35 | 35.00 | 0.0538 | 0.001538 | local
Time elapsed since server creation 21.2711749077
0 active tasks, 1 cores
>>> pool.close()
>>> pool.join()
Explanation: Even easier: Pool().server in pathos
End of explanation
import pathos
import sys
rhost = 'localhost'
rport = 23
if __name__ == '__main__':
tunnel = pathos.secure.Tunnel()
lport = tunnel.connect(rhost, rport)
print 'SSH Tunnel to:', rhost
print 'Remote port:', rport
print 'Local port:', lport
print 'Press <Enter> to disconnect'
sys.stdin.readline()
tunnel.disconnect()
import pathos
launcher = pathos.secure.Pipe()
config = launcher(command='hostname', rhost='localhost', background=False)
launcher.launch()
print launcher.response()
Explanation: Not covered: rpyc, pyro, and zmq
Related: secure authentication with ssh
pathos.secure: connection and tunnel
End of explanation |
14,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sampyl Examples
Here I will have some examples showing how to use Sampyl. This is for version 0.2.2. Let's import it and get started. Sampyl is a Python package used to sample from probability distributions using Markov Chain Monte Carlo (MCMC). This is most useful when sampling from the posterior distribution of a Bayesian model.
Every sampler provided by Sampyl works the same way. Define $ \log P(\theta) $ as a function, then pass it to the sampler class. The class returns a sampler object, which you can then use to sample from $P(\theta)$. For samplers which use the gradient, $\nabla_{\theta} \log P(\theta)$, Sampyl uses autograd to automatically calculate the gradients. However, you can pass in your own $\nabla_{\theta} \log P(\theta)$ functions.
Starting out simple, let's sample from a normal distribution.
Step1: A normal distribution with mean $\mu$ and variance $\sigma^2$ is defined as
Step2: First we'll use a Metropolis-Hastings sampler. Each sampler requires a $\log{P(\theta)}$ function and a starting state. We have included a function to calculate the maximum a posteriori (MAP) to find the peak of the distribution for use as the starting state. Then you call the sampler and a chain of samples is returned.
Step3: We can retrieve the chain by accessing the attributes defined by the parameter name(s) of logp.
Step4: Here we have sampled from a normal distribution with a mean of 3, indicated with the dashed vertical line.
There is also a No-U-Turn Sampler (NUTS), which avoids the random-walk nature of Metropolis samplers. NUTS uses the gradient of $\log{P(\theta)}$ to make intelligent state proposals. You'll notice here that we don't pass in any information about the gradient. Instead, it is calculated automatically with autograd.
Step5: Bayesian estimation of phone call rates
Let's try something a little more complicated. Let's say you run a business and you put an advertisement in the paper. Then, to judge the effectiveness of the ad, you want to compare the number of incoming phone calls per hour before and after the placement of the add. Then we can build a Bayesian model using a Poisson likelihood with exponential priors for $\lambda_1$ and $\lambda_2$.
\begin{align}
P(\lambda_1, \lambda_2 \mid D) &\propto P( D \mid \lambda_1, \lambda_2)\, P(\lambda_1)\, P(\lambda_2) \
P( D \mid \lambda_1, \lambda_2) &\sim \mathrm{Poisson}(D\mid\lambda_1)\,\mathrm{Poisson}(D\mid\lambda_2) \
P(\lambda_1) &\sim \mathrm{Exp}(1) \
P(\lambda_2) &\sim \mathrm{Exp}(1)
\end{align}
This analysis method is known as Bayesian inference or Bayesian estimation. We want to know likely values for $\lambda_1$ and $\lambda_2$. This information is contained in the posterior distribution $P(\lambda_1, \lambda_2 \mid D)$. To infer values for $\lambda_1$ and $\lambda_2$, we can sample from the posterior using our MCMC samplers.
Step6: Sampling returns a numpy record array which you can use to access samples by name. Variable names are taken directly from the argument list of logp.
Step7: Now to see if there is a significant difference between the two days. We can find the difference $\delta = \lambda_2 - \lambda_1$, then find the probability that $\delta > 0$.
Step8: There true difference in rates is two per hour, marked with the dashed line. Our posterior is showing an effect, but our best estimate is that the rate increased by only one call per hour. The 95% credible region is {-0.735 2.743} which idicates that there is a 95% probability that the true effect lies with the region, as it indeed does.
We can also use NUTS to sample from the posterior.
Step9: Linear models too
When you build larger models, it would be cumbersome to have to include every parameter as an argument in the logp function. To avoid this, you can declare the size of variables when passing in the starting state.
For instance, with a linear model it would be great to pass the coefficients as one parameter. First, we'll make some fake data, then infer the coefficients.
Step10: And using NUTS too.
Step11: Using one logp function for both logp and gradient
You can also use one logp function that returns both the logp value and the gradient. To let the samplers know about this, set grad_logp = True. I'm also using one argument theta as the parameter which contains the five $\beta$ coefficients and $\sigma$.
Step12: Sampling in parallel
We can make use of our multicore CPUs by running chains in parallel. To do this, simply request the number of chains you want when you call sample | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import sampyl as smp
from sampyl import np
# Autograd throws some warnings that are useful, but this is
# a demonstration, so I'll squelch them.
import warnings
warnings.filterwarnings('ignore')
Explanation: Sampyl Examples
Here I will have some examples showing how to use Sampyl. This is for version 0.2.2. Let's import it and get started. Sampyl is a Python package used to sample from probability distributions using Markov Chain Monte Carlo (MCMC). This is most useful when sampling from the posterior distribution of a Bayesian model.
Every sampler provided by Sampyl works the same way. Define $ \log P(\theta) $ as a function, then pass it to the sampler class. The class returns a sampler object, which you can then use to sample from $P(\theta)$. For samplers which use the gradient, $\nabla_{\theta} \log P(\theta)$, Sampyl uses autograd to automatically calculate the gradients. However, you can pass in your own $\nabla_{\theta} \log P(\theta)$ functions.
Starting out simple, let's sample from a normal distribution.
End of explanation
mu, sig = 3, 2
def logp(x):
return -np.log(sig) - (x - mu)**2/(2*sig**2)
Explanation: A normal distribution with mean $\mu$ and variance $\sigma^2$ is defined as:
$$
P(x,\mu, \sigma) = \frac{1}{\sigma \sqrt{2 \pi}} \; \mathrm{Exp}\left( \frac{-(x - \mu)^2}{2\sigma^2} \right)
$$
For numerical stability, it is typically better to deal with log probabilities, $\log{P(\theta)}$. Then for the normal distribution with known mean and variance,
$$
\log{P(x \mid \mu, \sigma)} = -\log{\sigma} - \frac{(x - \mu)^2}{2\sigma^2}
$$
where we can drop constant terms since the MCMC samplers only require something proportional to $\log{P(\theta)}$. We can easily write this as a Python function.
End of explanation
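# Quick sanity check (illustrative; assumes scipy is installed): the hand-written
# logp should match scipy's normal log-density up to an additive constant.
from scipy import stats
for x_test in (0.0, 1.7, 5.0):
    print(logp(x_test) - stats.norm(mu, sig).logpdf(x_test))  # always ~0.919 = log(sqrt(2*pi))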
start = smp.find_MAP(logp, {'x':1.})
metro = smp.Metropolis(logp, start)
chain = metro(10000, burn=2000, thin=4)
Explanation: First we'll use a Metropolis-Hastings sampler. Each sampler requires a $\log{P(\theta)}$ function and a starting state. We have included a function to calculate the maximum a posteriori (MAP) to find the peak of the distribution for use as the starting state. Then you call the sampler and a chain of samples is returned.
End of explanation
plt.plot(chain.x)
_ = plt.hist(chain.x, bins=30)
_ = plt.vlines(mu, 0, 250, linestyles='--')
Explanation: We can retrieve the chain by accessing the attributes defined by the parameter name(s) of logp.
End of explanation
nuts = smp.NUTS(logp, start)
chain = nuts(2100, burn=100)
plt.plot(chain)
_ = plt.hist(chain.x, bins=30)
_ = plt.vlines(mu, 0, 250, linestyles='--')
Explanation: Here we have sampled from a normal distribution with a mean of 3, indicated with the dashed vertical line.
There is also a No-U-Turn Sampler (NUTS), which avoids the random-walk nature of Metropolis samplers. NUTS uses the gradient of $\log{P(\theta)}$ to make intelligent state proposals. You'll notice here that we don't pass in any information about the gradient. Instead, it is calculated automatically with autograd.
End of explanation
# Fake data for the day before and after placing the ad.
# We'll make the calls increase by 2 an hour. Record data for each
# hour over two work days.
before = np.random.poisson(7, size=16)
after = np.random.poisson(9, size=16)
# Define the log-P function here
def logp(λ1, λ2):
model = smp.Model()
# Poisson log-likelihoods
model.add(smp.poisson(before, rate=λ1),
smp.poisson(after, rate=λ2))
# Exponential log-priors for rate parameters
model.add(smp.exponential(λ1),
smp.exponential(λ2))
return model()
start = smp.find_MAP(logp, {'λ1':1., 'λ2':1.})
sampler = smp.Metropolis(logp, start)
chain = sampler(10000, burn=2000, thin=4)
Explanation: Bayesian estimation of phone call rates
Let's try something a little more complicated. Let's say you run a business and you put an advertisement in the paper. Then, to judge the effectiveness of the ad, you want to compare the number of incoming phone calls per hour before and after the placement of the ad. We can then build a Bayesian model using a Poisson likelihood with exponential priors for $\lambda_1$ and $\lambda_2$.
\begin{align}
P(\lambda_1, \lambda_2 \mid D) &\propto P( D \mid \lambda_1, \lambda_2)\, P(\lambda_1)\, P(\lambda_2) \\
P( D \mid \lambda_1, \lambda_2) &\sim \mathrm{Poisson}(D\mid\lambda_1)\,\mathrm{Poisson}(D\mid\lambda_2) \\
P(\lambda_1) &\sim \mathrm{Exp}(1) \\
P(\lambda_2) &\sim \mathrm{Exp}(1)
\end{align}
This analysis method is known as Bayesian inference or Bayesian estimation. We want to know likely values for $\lambda_1$ and $\lambda_2$. This information is contained in the posterior distribution $P(\lambda_1, \lambda_2 \mid D)$. To infer values for $\lambda_1$ and $\lambda_2$, we can sample from the posterior using our MCMC samplers.
End of explanation
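# Sketch: quick numerical summaries of the two posterior rates (the chain is a
# numpy record array, so fields can also be pulled out by name as strings).
for name in ['λ1', 'λ2']:
    samples = chain[name]
    low, high = np.percentile(samples, (2.5, 97.5))
    print("{}: mean {:.2f}, 95% CR ({:.2f}, {:.2f})".format(name, samples.mean(), low, high))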
print(sampler.var_names)
plt.plot(chain.λ1)
plt.plot(chain.λ2)
Explanation: Sampling returns a numpy record array which you can use to access samples by name. Variable names are taken directly from the argument list of logp.
End of explanation
delta = chain.λ2 - chain.λ1
_ = plt.hist(delta, bins=30)
_ = plt.vlines(2, 0, 250, linestyle='--')
p = np.mean(delta > 0)
effect = np.mean(delta)
CR = np.percentile(delta, (2.5, 97.5))
print("{:.3f} probability the rate of phone calls increased".format(p))
print("delta = {:.3f}, 95% CR = {{{:.3f} {:.3f}}}".format(effect, *CR))
Explanation: Now to see if there is a significant difference between the two days. We can find the difference $\delta = \lambda_2 - \lambda_1$, then find the probability that $\delta > 0$.
End of explanation
nuts = smp.NUTS(logp, start)
chain = nuts.sample(2100, burn=100)
_ = plt.plot(chain.λ1)
_ = plt.plot(chain.λ2)
delta = chain.λ2 - chain.λ1
_ = plt.hist(delta, bins=30)
_ = plt.vlines(2, 0, 250, linestyle='--')
p = np.mean(delta > 0)
effect = np.mean(delta)
CR = np.percentile(delta, (2.5, 97.5))
print("{:.3f} probability the rate of phone calls increased".format(p))
print("delta = {:.3f}, 95% CR = {{{:.3f} {:.3f}}}".format(effect, *CR))
Explanation: The true difference in rates is two per hour, marked with the dashed line. Our posterior is showing an effect, but our best estimate is that the rate increased by only one call per hour. The 95% credible region is {-0.735 2.743}, which indicates that there is a 95% probability that the true effect lies within the region, as it indeed does.
We can also use NUTS to sample from the posterior.
End of explanation
# Number of data points
N = 200
# True parameters
sigma = 1
true_B = np.array([2, 1, 4])
# Simulated features, including a constant
X = np.ones((N, len(true_B)))
X[:,1:] = np.random.rand(N, 2)*2
# Simulated outcomes with normally distributed noise
y = np.dot(X, true_B) + np.random.randn(N)*sigma
data = np.ones((N, len(true_B) + 1))
data[:, :-1] = X
data[:, -1] = y
fig, axes = plt.subplots(figsize=(7,4),ncols=2)
for i, ax in enumerate(axes):
ax.scatter(X[:,i+1], y)
axes[0].set_ylabel('y')
axes[0].set_xlabel('X1')
axes[1].set_xlabel('X2')
fig.savefig('linear_model_data.png')
# Here, β is a length 3 array of coefficients
def logp(β, sig):
model = smp.Model()
# Estimate from our data and coefficients
y_hat = np.dot(X, β)
# Add log-likelihood
model.add(smp.normal(y, mu=y_hat, sig=sig))
# Add prior for estimate error
model.add(smp.exponential(sig))
# Uniform priors on coefficients
model.add(smp.uniform(β, lower=-100, upper=100))
return model()
start = smp.find_MAP(logp, {'β': np.ones(3), 'sig': 1.},
bounds={'β':(-5, 10), 'sig':(0.01, None)})
sampler = smp.Metropolis(logp, start)
chain = sampler(20000, burn=5000, thin=4)
_ = plt.plot(chain.β)
Explanation: Linear models too
When you build larger models, it would be cumbersome to have to include every parameter as an argument in the logp function. To avoid this, you can declare the size of variables when passing in the starting state.
For instance, with a linear model it would be great to pass the coefficients as one parameter. First, we'll make some fake data, then infer the coefficients.
End of explanation
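# Sketch: a quick cross-check of the Metropolis estimates against plain least squares;
# both should land near the true coefficients [2, 1, 4] used to simulate the data.
import numpy as onp   # plain numpy, to avoid relying on autograd's wrapped linalg
ols_beta = onp.linalg.lstsq(X, y)[0]
print('least-squares estimate:', ols_beta)
print('posterior mean of β:   ', chain.β.mean(axis=0))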
start = smp.find_MAP(logp, {'β': np.ones(3), 'sig': 1.})
nuts = smp.NUTS(logp, start)
chain = nuts.sample(2100, burn=100)
fig, axes = plt.subplots(figsize=(8,5), nrows=2, ncols=2)
for i, (row, param) in enumerate(zip(axes, [chain.β, chain.sig])):
row[0].plot(param)
row[0].set_ylabel('Sample value')
#row[0].set_xlabel('Sample')
row[0].set_title(['β', 'sig'][i])
row[1].set_title(['β', 'sig'][i])
if len(param.shape) > 1:
for each in param.T:
row[1].hist(each, alpha=0.8, histtype='stepfilled')
row[1].set_yticklabels('')
row[1].vlines([2,1,4], 0, 600, linestyles='--', alpha=0.5)
else:
row[1].hist(param, alpha=0.8, histtype='stepfilled')
row[1].set_yticklabels('')
row[1].vlines(1, 0, 600, linestyles='--', alpha=0.5)
#row[1].set_xlabel('Sample value')
fig.tight_layout(pad=0.1, h_pad=1.5, w_pad=1)
fig.savefig('linear_model_posterior.png')
Explanation: And using NUTS too.
End of explanation
from autograd import grad
grads = [grad(logp, 0), grad(logp, 1)]
def single_logp(theta):
b, sig = theta[:3], theta[-1]
logp_val = logp(b, sig)
grad_val = np.hstack([each(b, sig) for each in grads])
return logp_val, grad_val
start = {'theta': np.ones(4)}
nuts = smp.NUTS(single_logp, start, grad_logp=True)
chain = nuts.sample(2000, burn=1000, thin=2)
_ = plt.plot(chain.theta[:, :4])
Explanation: Using one logp function for both logp and gradient
You can also use one logp function that returns both the logp value and the gradient. To let the samplers know about this, set grad_logp = True. I'm also using one argument theta as the parameter which contains the $\beta$ coefficients and $\sigma$.
End of explanation
start = smp.find_MAP(logp, {'β': np.ones(3), 'sig': 1.})
nuts = smp.NUTS(logp, start)
chains = nuts.sample(1100, burn=100, n_chains=2)
fig, axes = plt.subplots(figsize=(10,3), ncols=2)
for ax, chain in zip(axes, chains):
_ = ax.plot(chain.β)
Explanation: Sampling in parallel
We can make use of our multicore CPUs by running chains in parallel. To do this, simply request the number of chains you want when you call sample: nuts.sample(1000, n_chains=4). Each chain is given its own process and the OS decides how to run the processes. Typically this means that each process will run on its own core. So, if you have four cores and four chains, they will all run in parallel. But, if you have two cores, only two will run at a time.
End of explanation |
14,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Texts in Python
Below is my solution to the exercises posed in the notebook 01-WorkingWithTexts.
As a reminder, you were asked to
Step1: First, create variables that split the strings into lists of words. I'll print out the first 10 words to make sure everything looks good.
Step2: To count the number of words, simply take the length of each list and print the result. You can add print statements to make the output look prett.
Step3: To count the number of words that are in title case we can use list comprehension.
Step4: To get the average length of the words in each novel we first transform each word into its length, using the len function and list comprehension. I'll print out the first 10 word length to make sure we did it right. We can then sum the list and divide by the total number of words.
Step5: To count the number of long words I create two new variables and use list comprehension to keep only words longer than 7 characters. I then divide the length of these lists by the total number of words, to get proportion. | Python Code:
austen_string = open('../Data/Austen_PrideAndPrejudice.txt', encoding='utf-8').read()
alcott_string = open('../Data/Alcott_GarlandForGirls.txt', encoding='utf-8').read()
Explanation: Working with Texts in Python
Below is my solution to the exercises posed in the notebook 01-WorkingWithTexts.
As a reminder, you were asked to:
Run the cell below to read in the text of "Pride and Prejudice" and assign it to the variable "austen_string" and read in the text of Louisa May Alcott's "A Garland for Girls," a children's book, and assugn it to the variable "aclott_string." With these variables, print the answer to the following questions
How many words are in each novel?
How many words in each novel appear in title case?
What is the approximate average word length in each novel?
How many words longer than 7 characters are in each novel? (don't worry about punctuation for now)
What proportion of the total words are the long words in each novel?
End of explanation
austen_words = austen_string.split()
alcott_words = alcott_string.split()
print(austen_words[:10])
print(alcott_words[:10])
Explanation: First, create variables that split the strings into lists of words. I'll print out the first 10 words to make sure everything looks good.
End of explanation
#How many words are in each novel?
print("Number of words in Pride and Prejudice:")
print(len(austen_words))
print("Number of words in A Garland for Girls")
print(len(alcott_words))
Explanation: To count the number of words, simply take the length of each list and print the result. You can add print statements to make the output look prett.
End of explanation
#How many words in each novel appear in title case?
print("Number of words in title case in Pride and Prejudice:")
print(len([word for word in austen_words if word.istitle()]))
print("Number of words in title case in A Garland for Girls:")
print(len([word for word in alcott_words if word.istitle()]))
Explanation: To count the number of words that are in title case we can use list comprehension.
End of explanation
austen_word_length = [len(word) for word in austen_words]
print(austen_word_length[:10])
alcott_word_length = [len(word) for word in alcott_words]
print(alcott_word_length[:10])
print("Average word length in Pride and Prejudice:")
print(sum(austen_word_length)/len(austen_word_length))
print("Average word length in A Garland for Girls:")
print(sum(alcott_word_length)/len(alcott_word_length))
Explanation: To get the average length of the words in each novel we first transform each word into its length, using the len function and list comprehension. I'll print out the first 10 word length to make sure we did it right. We can then sum the list and divide by the total number of words.
End of explanation
## How many words longer than 7 characters are in each novel? (don't worry about punctuation for now)
## What proportion of the total words are the long words in each novel
austen_long = [word for word in austen_words if len(word)>7]
print("Number of long words in Pride and Prejudice:")
print(len(austen_long))
alcott_long = [word for word in alcott_words if len(word)>7]
print("Number of long words in A Garland for Girls:")
print(len(alcott_long))
print("Proportion of words that are long in Pride and Prejudice:")
print(len(austen_long)/len(austen_words))
print("Proportion of words that are long in A Garland for Girls:")
print(len(alcott_long)/len(alcott_words))
dir()
locals()
Explanation: To count the number of long words I create two new variables and use list comprehension to keep only words longer than 7 characters. I then divide the length of these lists by the total number of words, to get proportion.
End of explanation |
14,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Styling
This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
You can apply conditional formatting, the visual styling of a DataFrame
depending on the data within, by using the DataFrame.style property.
This is a property that returns a Styler object, which has
useful methods for formatting and displaying DataFrames.
The styling is accomplished using CSS.
You write "style functions" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS "attribute
Step1: Here's a boring example of rendering a DataFrame, without any (visible) styles
Step2: Note
Step4: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell.
Let's write a simple style function that will color negative numbers red and positive numbers black.
Step5: In this case, the cell's style depends only on it's own value.
That means we should use the Styler.applymap method which works elementwise.
Step6: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a <style> tag. This will be a common theme.
Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column.
We can't use .applymap anymore since that operated elementwise.
Instead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself.
Step7: In this case the input is a Series, one column at a time.
Notice that the output shape of highlight_max matches the input shape, an array with len(s) items.
We encourage you to use method chains to build up a style piecewise, before finally rending at the end of the chain.
Step8: Above we used Styler.apply to pass in each column one at a time.
<span style="background-color
Step9: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
Step10: Building Styles Summary
Style functions should return strings with one or more CSS attribute
Step11: For row and column slicing, any valid indexer to .loc will work.
Step12: Only label-based slicing is supported right now, not positional.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
python
my_func2 = functools.partial(my_func, subset=42)
Finer Control
Step13: Use a dictionary to format specific columns.
Step14: Or pass in a callable (or dictionary of callables) for more flexible handling.
Step15: You can format the text displayed for missing values by na_rep.
Step16: These formatting techniques can be used in combination with styling.
Step17: Builtin styles
Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself.
Step18: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
Step19: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
Step20: There's also .highlight_min and .highlight_max.
Step21: Use Styler.set_properties when the style doesn't actually depend on the values.
Step22: Bar charts
You can include "bar charts" in your DataFrame.
Step23: New in version 0.20.0 is the ability to customize further the bar chart
Step26: The following example aims to give a highlight of the behavior of the new align options
Step27: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set
Step28: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Other Options
You've seen a few methods for data-driven styling.
Styler also provides a few other options for styles that don't depend on the data.
precision
captions
table-wide styles
missing values representation
hiding the index or columns
Each of these can be specified in two ways
Step29: Or through a set_precision method.
Step30: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
Captions
Regular table captions can be added in a few ways.
Step31: Table styles
The next option you have is "table styles".
These are styles that apply to the table as a whole, but don't look at the data.
Certain stylings, including pseudo-selectors like
Step32: table_styles should be a list of dictionaries.
Each dictionary should have the selector and props keys.
The value for selector should be a valid CSS selector.
Recall that all the styles are already attached to an id, unique to
each Styler. This selector is in addition to that id.
The value for props should be a list of tuples of ('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand.
We hope to collect some useful ones either in pandas, or preferably in a new package that builds on top of the tools here.
Missing values
You can control the default missing values representation for the entire table through the set_na_rep method.
Step33: Hiding the Index or Columns
The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering by calling Styler.hide_columns and passing in the name of a column, or a slice of columns.
Step34: CSS classes
Certain CSS classes are attached to cells.
Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
row<n> where n is the numeric position of the row
level<k> where k is the level in a MultiIndex
Column label cells include
col_heading
col<n> where n is the numeric position of the column
level<k> where k is the level in a MultiIndex
Blank cells include blank
Data cells include data
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns must be unique
No large repr, and performance isn't great; this is intended for summary DataFrames
You can only style the values, not the index or columns
You can only apply styles, you can't insert new HTML entities
Some of these will be addressed in the future.
Terms
Style function
Step35: Export to Excel
New in version 0.20.0
<span style="color
Step36: A screenshot of the output
Step37: We'll use the following template
Step38: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
Step39: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
Step40: Our custom template accepts a table_title keyword. We can provide the value in the .render method.
Step41: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
Step42: Here's the template structure
Step43: See the template in the GitHub repo for more details. | Python Code:
import matplotlib.pyplot
# We have this here to trigger matplotlib's font cache stuff.
# This cell is hidden from the output
import pandas as pd
import numpy as np
np.random.seed(24)
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
axis=1)
df.iloc[3, 3] = np.nan
df.iloc[0, 2] = np.nan
Explanation: Styling
This document is written as a Jupyter Notebook, and can be viewed or downloaded here.
You can apply conditional formatting, the visual styling of a DataFrame
depending on the data within, by using the DataFrame.style property.
This is a property that returns a Styler object, which has
useful methods for formatting and displaying DataFrames.
The styling is accomplished using CSS.
You write "style functions" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS "attribute: value" pairs for the values.
These functions can be incrementally passed to the Styler which collects the styles before rendering.
Building styles
Pass your style functions into one of the following methods:
Styler.applymap: elementwise
Styler.apply: column-/row-/table-wise
Both of those methods take a function (and some other keyword arguments) and applies your function to the DataFrame in a certain way.
Styler.applymap works through the DataFrame elementwise.
Styler.apply passes each column or row into your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument.
For columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.
For Styler.applymap your function should take a scalar and return a single string with the CSS attribute-value pair.
For Styler.apply your function should take a Series or DataFrame (depending on the axis parameter), and return a Series or DataFrame with an identical shape where each value is a string with a CSS attribute-value pair.
Let's see some examples.
End of explanation
df.style
Explanation: Here's a boring example of rendering a DataFrame, without any (visible) styles:
End of explanation
df.style.highlight_null().render().split('\n')[:10]
Explanation: Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it so they are rendered automatically. If you want the actual HTML back for further processing or for writing to file call the .render() method which returns a string.
The above output looks very similar to the standard DataFrame HTML representation. But we've done some work behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.
End of explanation
def color_negative_red(val):
    '''
    Takes a scalar and returns a string with
    the css property `'color: red'` for negative
    strings, black otherwise.
    '''
    color = 'red' if val < 0 else 'black'
    return 'color: %s' % color
Explanation: The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).
When writing style functions, you take care of producing the CSS attribute / value pairs you want. Pandas matches those up with the CSS classes that identify each cell.
Let's write a simple style function that will color negative numbers red and positive numbers black.
End of explanation
s = df.style.applymap(color_negative_red)
s
Explanation: In this case, the cell's style depends only on its own value.
That means we should use the Styler.applymap method which works elementwise.
End of explanation
def highlight_max(s):
'''
highlight the maximum in a Series yellow.
'''
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
df.style.apply(highlight_max)
Explanation: Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.
Notice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a <style> tag. This will be a common theme.
Finally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output.
Now suppose you wanted to highlight the maximum value in each column.
We can't use .applymap anymore since that operated elementwise.
Instead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself.
End of explanation
df.style.\
applymap(color_negative_red).\
apply(highlight_max)
Explanation: In this case the input is a Series, one column at a time.
Notice that the output shape of highlight_max matches the input shape, an array with len(s) items.
We encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.
End of explanation
def highlight_max(data, color='yellow'):
'''
highlight the maximum in a Series or DataFrame
'''
attr = 'background-color: {}'.format(color)
if data.ndim == 1: # Series from .apply(axis=0) or axis=1
is_max = data == data.max()
return [attr if v else '' for v in is_max]
else: # from .apply(axis=None)
is_max = data == data.max().max()
return pd.DataFrame(np.where(is_max, attr, ''),
index=data.index, columns=data.columns)
Explanation: Above we used Styler.apply to pass in each column one at a time.
<span style="background-color: #DEDEBE">Debugging Tip: If you're having trouble writing your style function, try just passing it into <code style="background-color: #DEDEBE">DataFrame.apply</code>. Internally, <code style="background-color: #DEDEBE">Styler.apply</code> uses <code style="background-color: #DEDEBE">DataFrame.apply</code> so the result should be the same.</span>
What if you wanted to highlight just the maximum value in the entire table?
Use .apply(function, axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let's try that next.
We'll rewrite our highlight-max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from .apply(axis=None)). We'll also allow the color to be adjustable, to demonstrate that .apply, and .applymap pass along keyword arguments.
End of explanation
df.style.apply(highlight_max, color='darkorange', axis=None)
Explanation: When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.
End of explanation
df.style.apply(highlight_max, subset=['B', 'C', 'D'])
Explanation: Building Styles Summary
Style functions should return strings with one or more CSS attribute: value delimited by semicolons. Use
Styler.applymap(func) for elementwise styles
Styler.apply(func, axis=0) for columnwise styles
Styler.apply(func, axis=1) for rowwise styles
Styler.apply(func, axis=None) for tablewise styles
And crucially the input and output shapes of func must match. If x is the input then func(x).shape == x.shape.
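As a quick aside, a style function may also return several CSS attribute-value pairs in a single string, separated by semicolons. A minimal sketch (the helper name emphasize_negative is purely illustrative and not part of the original notebook):
def emphasize_negative(val):
    # combine two CSS properties in the one returned string
    return 'color: red; font-weight: bold' if val < 0 else ''
df.style.applymap(emphasize_negative)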
Finer control: slicing
Both Styler.apply, and Styler.applymap accept a subset keyword.
This allows you to apply styles to specific rows or columns, without having to code that logic into your style function.
The value passed to subset behaves similarly to slicing a DataFrame.
A scalar is treated as a column label
A list (or series or numpy array)
A tuple is treated as (row_indexer, column_indexer)
Consider using pd.IndexSlice to construct the tuple for the last one.
End of explanation
df.style.applymap(color_negative_red,
subset=pd.IndexSlice[2:5, ['B', 'D']])
Explanation: For row and column slicing, any valid indexer to .loc will work.
End of explanation
df.style.format("{:.2%}")
Explanation: Only label-based slicing is supported right now, not positional.
If your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.
python
my_func2 = functools.partial(my_func, subset=42)
Finer Control: Display Values
We distinguish the display value from the actual value in Styler.
To control the display value (the text that is printed in each cell), use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a single value and returns a string.
End of explanation
df.style.format({'B': "{:0<4.0f}", 'D': '{:+.2f}'})
Explanation: Use a dictionary to format specific columns.
End of explanation
df.style.format({"B": lambda x: "±{:.2f}".format(abs(x))})
Explanation: Or pass in a callable (or dictionary of callables) for more flexible handling.
End of explanation
df.style.format("{:.2%}", na_rep="-")
Explanation: You can format the text displayed for missing values by na_rep.
End of explanation
df.style.highlight_max().format(None, na_rep="-")
Explanation: These formatting techniques can be used in combination with styling.
End of explanation
df.style.highlight_null(null_color='red')
Explanation: Builtin styles
Finally, we expect certain styling functions to be common enough that we've included a few "built-in" to the Styler, so you don't have to write them yourself.
End of explanation
import seaborn as sns
cm = sns.light_palette("green", as_cmap=True)
s = df.style.background_gradient(cmap=cm)
s
Explanation: You can create "heatmaps" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.
End of explanation
# Uses the full color range
df.loc[:4].style.background_gradient(cmap='viridis')
# Compress the color range
(df.loc[:4]
.style
.background_gradient(cmap='viridis', low=.5, high=0)
.highlight_null('red'))
Explanation: Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. This is useful so that you can actually read the text still.
End of explanation
df.style.highlight_max(axis=0)
Explanation: There's also .highlight_min and .highlight_max.
End of explanation
df.style.set_properties(**{'background-color': 'black',
'color': 'lawngreen',
'border-color': 'white'})
Explanation: Use Styler.set_properties when the style doesn't actually depend on the values.
End of explanation
df.style.bar(subset=['A', 'B'], color='#d65f5f')
Explanation: Bar charts
You can include "bar charts" in your DataFrame.
End of explanation
df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])
Explanation: New in version 0.20.0 is the ability to further customize the bar chart: You can now have the df.style.bar be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of [color_negative, color_positive].
Here's how you can change the above with the new align='mid' option:
End of explanation
import pandas as pd
from IPython.display import HTML
# Test series
test1 = pd.Series([-100,-60,-30,-20], name='All Negative')
test2 = pd.Series([10,20,50,100], name='All Positive')
test3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')
head = """
<table>
  <thead>
    <th>Align</th>
    <th>All Negative</th>
    <th>All Positive</th>
    <th>Both Neg and Pos</th>
  </thead>
  <tbody>
"""
aligns = ['left','zero','mid']
for align in aligns:
row = "<tr><th>{}</th>".format(align)
for series in [test1,test2,test3]:
s = series.copy()
s.name=''
row += "<td>{}</td>".format(s.to_frame().style.bar(align=align,
color=['#d65f5f', '#5fba7d'],
width=100).render()) #testn['width']
row += '</tr>'
head += row
head += """
</tbody>
</table>
"""
HTML(head)
Explanation: The following example aims to highlight the behavior of the new align options:
End of explanation
df2 = -df
style1 = df.style.applymap(color_negative_red)
style1
style2 = df2.style
style2.use(style1.export())
style2
Explanation: Sharing styles
Say you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and use it on the second DataFrame with df2.style.use
End of explanation
with pd.option_context('display.precision', 2):
html = (df.style
.applymap(color_negative_red)
.apply(highlight_max))
html
Explanation: Notice that you're able to share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.
Other Options
You've seen a few methods for data-driven styling.
Styler also provides a few other options for styles that don't depend on the data.
precision
captions
table-wide styles
missing values representation
hiding the index or columns
Each of these can be specified in two ways:
A keyword argument to Styler.__init__
A call to one of the .set_ or .hide_ methods, e.g. .set_caption or .hide_columns
The best method to use depends on the context. Use the Styler constructor when building many styled DataFrames that should all share the same properties. For interactive use, the .set_ and .hide_ methods are more convenient.
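A rough sketch of the two approaches follows; the constructor keywords shown (precision, caption) match this pandas version's Styler signature, but treat them as an assumption if you are on a different release:
from pandas.io.formats.style import Styler
# Option 1: pass shared options to the constructor when building many styled DataFrames
styled = Styler(df, precision=2, caption="Summary table")
# Option 2: chain the .set_ / .hide_ methods for interactive use
styled = df.style.set_precision(2).set_caption("Summary table").hide_index()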
Precision
You can control the precision of floats using pandas' regular display.precision option.
End of explanation
df.style\
.applymap(color_negative_red)\
.apply(highlight_max)\
.set_precision(2)
Explanation: Or through a set_precision method.
End of explanation
df.style.set_caption('Colormaps, with a caption.')\
.background_gradient(cmap=cm)
Explanation: Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.
Captions
Regular table captions can be added in a few ways.
End of explanation
from IPython.display import HTML
def hover(hover_color="#ffff99"):
return dict(selector="tr:hover",
props=[("background-color", "%s" % hover_color)])
styles = [
hover(),
dict(selector="th", props=[("font-size", "150%"),
("text-align", "center")]),
dict(selector="caption", props=[("caption-side", "bottom")])
]
html = (df.style.set_table_styles(styles)
.set_caption("Hover to highlight."))
html
Explanation: Table styles
The next option you have is "table styles".
These are styles that apply to the table as a whole, but don't look at the data.
Certain stylings, including pseudo-selectors like :hover, can only be used this way.
End of explanation
(df.style
.set_na_rep("FAIL")
.format(None, na_rep="PASS", subset=["D"])
.highlight_null("yellow"))
Explanation: table_styles should be a list of dictionaries.
Each dictionary should have the selector and props keys.
The value for selector should be a valid CSS selector.
Recall that all the styles are already attached to an id, unique to
each Styler. This selector is in addition to that id.
The value for props should be a list of tuples of ('attribute', 'value').
table_styles are extremely flexible, but not as fun to type out by hand.
We hope to collect some useful ones either in pandas, or preferably in a new package that builds on top of the tools here.
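As a minimal sketch of that structure (the selectors and property values here are only illustrative):
minimal_styles = [
    dict(selector="caption", props=[("font-size", "120%")]),            # style the table caption
    dict(selector="td:hover", props=[("background-color", "#ffff99")])  # a pseudo-selector rule
]
df.style.set_table_styles(minimal_styles)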
Missing values
You can control the default missing values representation for the entire table through the set_na_rep method.
End of explanation
df.style.hide_index()
df.style.hide_columns(['C','D'])
Explanation: Hiding the Index or Columns
The index can be hidden from rendering by calling Styler.hide_index. Columns can be hidden from rendering by calling Styler.hide_columns and passing in the name of a column, or a slice of columns.
End of explanation
import ipywidgets as widgets  # the old IPython.html.widgets import has moved to the ipywidgets package
@widgets.interact
def f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):
return df.style.background_gradient(
cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,
as_cmap=True)
)
def magnify():
return [dict(selector="th",
props=[("font-size", "4pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
np.random.seed(25)
cmap = sns.diverging_palette(5, 250, as_cmap=True)
bigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()
bigdf.style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '1pt'})\
.set_caption("Hover to magnify")\
.set_precision(2)\
.set_table_styles(magnify())
Explanation: CSS classes
Certain CSS classes are attached to cells.
Index and Column names include index_name and level<k> where k is its level in a MultiIndex
Index label cells include
row_heading
row<n> where n is the numeric position of the row
level<k> where k is the level in a MultiIndex
Column label cells include
col_heading
col<n> where n is the numeric position of the column
level<k> where k is the level in a MultiIndex
Blank cells include blank
Data cells include data
Limitations
DataFrame only (use Series.to_frame().style)
The index and columns must be unique
No large repr, and performance isn't great; this is intended for summary DataFrames
You can only style the values, not the index or columns
You can only apply styles, you can't insert new HTML entities
Some of these will be addressed in the future.
Terms
Style function: a function that's passed into Styler.apply or Styler.applymap and returns values like 'css attribute: value'
Builtin style functions: style functions that are methods on Styler
table style: a dictionary with the two keys selector and props. selector is the CSS selector that props will apply to. props is a list of (attribute, value) tuples. A list of table styles passed into Styler.
Fun stuff
Here are a few interesting examples.
Styler interacts pretty well with widgets. If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.
End of explanation
df.style.\
applymap(color_negative_red).\
apply(highlight_max).\
to_excel('styled.xlsx', engine='openpyxl')
Explanation: Export to Excel
New in version 0.20.0
<span style="color: red">Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span>
Some support is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL or XlsxWriter engines. CSS2.2 properties handled include:
background-color
border-style, border-width, border-color and their {top, right, bottom, left variants}
color
font-family
font-style
font-weight
text-align
text-decoration
vertical-align
white-space: nowrap
Only CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.
The following pseudo CSS properties are also available to set excel specific style properties:
number-format
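For instance, a small sketch of attaching an Excel number format through this pseudo property (the file name and format string are only examples):
(df.style
   .applymap(lambda v: 'number-format: 0.00%')
   .to_excel('formatted.xlsx', engine='openpyxl'))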
End of explanation
from jinja2 import Environment, ChoiceLoader, FileSystemLoader
from IPython.display import HTML
from pandas.io.formats.style import Styler
Explanation: A screenshot of the output:
Extensibility
The core of pandas is, and will remain, its "high-performance, easy-to-use data structures".
With that in mind, we hope that DataFrame.style accomplishes two goals
Provide an API that is pleasing to use interactively and is "good enough" for many tasks
Provide the foundations for dedicated libraries to build on
If you build a great library on top of this, let us know and we'll link to it.
Subclassing
If the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.
We'll show an example of extending the default template to insert a custom header before each table.
End of explanation
with open("templates/myhtml.tpl") as f:
print(f.read())
Explanation: We'll use the following template:
End of explanation
class MyStyler(Styler):
env = Environment(
loader=ChoiceLoader([
FileSystemLoader("templates"), # contains ours
Styler.loader, # the default
])
)
template = env.get_template("myhtml.tpl")
Explanation: Now that we've created a template, we need to set up a subclass of Styler that
knows about it.
End of explanation
MyStyler(df)
Explanation: Notice that we include the original loader in our environment's loader.
That's because we extend the original template, so the Jinja environment needs
to be able to find it.
Now we can use that custom styler. Its __init__ takes a DataFrame.
End of explanation
HTML(MyStyler(df).render(table_title="Extending Example"))
Explanation: Our custom template accepts a table_title keyword. We can provide the value in the .render method.
End of explanation
EasyStyler = Styler.from_custom_template("templates", "myhtml.tpl")
EasyStyler(df)
Explanation: For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.
End of explanation
with open("templates/template_structure.html") as f:
structure = f.read()
HTML(structure)
Explanation: Here's the template structure:
End of explanation
# Hack to get the same style in the notebook as the
# main site. This is hidden in the docs.
from IPython.display import HTML
with open("themes/nature_with_gtoc/static/nature.css_t") as f:
css = f.read()
HTML('<style>{}</style>'.format(css))
Explanation: See the template in the GitHub repo for more details.
End of explanation |
14,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPU accelerated tensorflow
Author
Step1: Part 02 -- Manually specifying devices for running Tensorflow code
Step2: Setting up Tensorflow to run on CPU
Step3: Setting up Tensorflow to run on GPU
Step4: Part 03 -- Benchmarking Tensorflow GPU vs CPU | Python Code:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Explanation: GPU accelerated tensorflow
Author:
Dr. Rahul Remanan
This code notebook is an introduction to GPU accelerated tensorflow.
Part 01 -- Checking Tensorflow GPU visibility
End of explanation
import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
Explanation: Part 02 -- Manually specifying devices for running Tensorflow code
End of explanation
# Creates a graph.
with tf.device('/cpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
Explanation: Setting up Tensorflow to run on CPU
End of explanation
with tf.device('/gpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
with tf.Session() as sess:
print (sess.run(c))
Explanation: Setting up Tensorflow to run on GPU
End of explanation
import time
import tensorflow as tf
def tf_benchmark(a=None, shape_a=None, b=None, shape_b=None, enable_GPU = False):
device = 'cpu'
if enable_GPU:
device = 'gpu'
start_time = time.time()
with tf.device('/{}:0'.format(device)):
a = tf.constant(a, shape=shape_a, name = 'a')
b = tf.constant(b, shape=shape_b, name='b')
c = tf.matmul(a, b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
output = sess.run(c)
execution_time = time.time()-start_time
return {'output': output, 'execution time': execution_time}
a=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
b=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
shape_a=[2, 3]
shape_b=[3,2]
CPU_exec_time = tf_benchmark(a=a,
b=b,
shape_a=shape_a,
shape_b=shape_b,
enable_GPU=False)
GPU_exec_time = tf_benchmark(a=a,
b=b,
shape_a=shape_a,
shape_b=shape_b,
enable_GPU=True)
print ("CPU execution time: {}".format(CPU_exec_time['execution time']))
print ("GPU execution time: {}".format(GPU_exec_time['execution time']))
print ("GPU vs CPU execution time delta: {}".format(GPU_exec_time['execution time'] - CPU_exec_time['execution time']))
print ("GPU acceleration factor: {}".format(CPU_exec_time['execution time'] / GPU_exec_time['execution time']))
Explanation: Part 03 -- Benchmarking Tensorflow GPU vs CPU
End of explanation |
14,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artificial Intelligence Engineer Nanodegree - Probabilistic Models
Project
Step1: The frame represented by video 98, frame 1 is shown here
Step2: Try it!
Step3: Build the training set
Now that we have a feature list defined, we can pass that list to the build_training method to collect the features for all the words in the training set. Each word in the training set has multiple examples from various videos. Below we can see the unique words that have been loaded into the training set
Step4: The training data in training is an object of class WordsData defined in the asl_data module. in addition to the words list, data can be accessed with the get_all_sequences, get_all_Xlengths, get_word_sequences, and get_word_Xlengths methods. We need the get_word_Xlengths method to train multiple sequences with the hmmlearn library. In the following example, notice that there are two lists; the first is a concatenation of all the sequences(the X portion) and the second is a list of the sequence lengths(the Lengths portion).
Step5: More feature sets
So far we have a simple feature set that is enough to get started modeling. However, we might get better results if we manipulate the raw values a bit more, so we will go ahead and set up some other options now for experimentation later. For example, we could normalize each speaker's range of motion with grouped statistics using Pandas stats functions and pandas groupby. Below is an example for finding the means of all speaker subgroups.
Step6: To select a mean that matches by speaker, use the pandas map method
Step7: Try it!
Step8: <a id='part1_submission'></a>
Features Implementation Submission
Implement four feature sets and answer the question that follows.
- normalized Cartesian coordinates
- use mean and standard deviation statistics and the standard score equation to account for speakers with different heights and arm length
polar coordinates
calculate polar coordinates with Cartesian to polar equations
use the np.arctan2 function and swap the x and y axes to move the $0$ to $2\pi$ discontinuity to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from $0$ to $2\pi$ occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results. By swapping the x and y axes, that discontinuity moves to directly above the speaker's head, an area not generally used in signing.
delta difference
as described in Thad's lecture, use the difference in values between one frame and the next frames as features
pandas diff method and fillna method will be helpful for this one
custom features
These are your own design; combine techniques used above or come up with something else entirely. We look forward to seeing what you come up with!
Some ideas to get you started
Step9: Question 1
Step10: <a id='part2_tutorial'></a>
PART 2
Step11: The HMM model has been trained and information can be pulled from the model, including means and variances for each feature and hidden state. The log likelihood for any individual sample or group of samples can also be calculated with the score method.
Step12: Try it!
Experiment by changing the feature set, word, and/or num_hidden_states values in the next cell to see changes in values.
Step14: Visualize the hidden states
We can plot the means and variances for each state and feature. Try varying the number of states trained for the HMM model and examine the variances. Are there some models that are "better" than others? How can you tell? We would like to hear what you think in the classroom online.
Step15: ModelSelector class
Review the ModelSelector class from the codebase found in the my_model_selectors.py module. It is designed to be a strategy pattern for choosing different model selectors. For the project submission in this section, subclass SelectorModel to implement the following model selectors. In other words, you will write your own classes/functions in the my_model_selectors.py module and run them from this notebook
Step16: Cross-validation folds
If we simply score the model with the Log Likelihood calculated from the feature sequences it has been trained on, we should expect that more complex models will have higher likelihoods. However, that doesn't tell us which would have a better likelihood score on unseen data. The model will likely be overfit as complexity is added. To estimate which topology model is better using only the training data, we can compare scores using cross-validation. One technique for cross-validation is to break the training set into "folds" and rotate which fold is left out of training. The "left out" fold scored. This gives us a proxy method of finding the best model to use on "unseen data". In the following example, a set of word sequences is broken into three folds using the scikit-learn Kfold class object. When you implement SelectorCV, you will use this technique.
Step17: Tip
Step18: Question 2
Step19: <a id='part3_tutorial'></a>
PART 3
Step20: Load the test set
The build_test method in ASLdb is similar to the build_training method already presented, but there are a few differences
Step21: <a id='part3_submission'></a>
Recognizer Implementation Submission
For the final project submission, students must implement a recognizer following guidance in the my_recognizer.py module. Experiment with the four feature sets and the three model selection methods (that's 12 possible combinations). You can add and remove cells for experimentation or run the recognizers locally in some other way during your experiments, but retain the results for your discussion. For submission, you will provide code cells of only three interesting combinations for your discussion (see questions below). At least one of these should produce a word error rate of less than 60%, i.e. WER < 0.60 .
Tip
Step22: Question 3
Step23: <a id='part4_info'></a>
PART 4 | Python Code:
import numpy as np
import pandas as pd
from asl_data import AslDb
asl = AslDb() # initializes the database
asl.df.head() # displays the first five rows of the asl database, indexed by video and frame
asl.df.ix[98,1] # look at the data available for an individual frame
Explanation: Artificial Intelligence Engineer Nanodegree - Probabilistic Models
Project: Sign Language Recognition System
Introduction
Part 1 Feature Selection
Tutorial
Features Submission
Features Unittest
Part 2 Train the models
Tutorial
Model Selection Score Submission
Model Score Unittest
Part 3 Build a Recognizer
Tutorial
Recognizer Submission
Recognizer Unittest
Part 4 (OPTIONAL) Improve the WER with Language Models
<a id='intro'></a>
Introduction
The overall goal of this project is to build a word recognizer for American Sign Language video sequences, demonstrating the power of probabalistic models. In particular, this project employs hidden Markov models (HMM's) to analyze a series of measurements taken from videos of American Sign Language (ASL) collected for research (see the RWTH-BOSTON-104 Database). In this video, the right-hand x and y locations are plotted as the speaker signs the sentence.
The raw data, train, and test sets are pre-defined. You will derive a variety of feature sets (explored in Part 1), as well as implement three different model selection criterion to determine the optimal number of hidden states for each word model (explored in Part 2). Finally, in Part 3 you will implement the recognizer and compare the effects the different combinations of feature sets and model selection criteria.
At the end of each Part, complete the submission cells with implementations, answer all questions, and pass the unit tests. Then submit the completed notebook for review!
<a id='part1_tutorial'></a>
PART 1: Data
Features Tutorial
Load the initial database
A data handler designed for this database is provided in the student codebase as the AslDb class in the asl_data module. This handler creates the initial pandas dataframe from the corpus of data included in the data directory as well as dictionaries suitable for extracting data in a format friendly to the hmmlearn library. We'll use those to create models in Part 2.
To start, let's set up the initial database and select an example set of features for the training set. At the end of Part 1, you will create additional feature sets for experimentation.
End of explanation
asl.df['grnd-ry'] = asl.df['right-y'] - asl.df['nose-y']
asl.df.head() # the new feature 'grnd-ry' is now in the frames dictionary
Explanation: The frame represented by video 98, frame 1 is shown here:
Feature selection for training the model
The objective of feature selection when training a model is to choose the most relevant variables while keeping the model as simple as possible, thus reducing training time. We can use the raw features already provided or derive our own and add columns to the pandas dataframe asl.df for selection. As an example, in the next cell a feature named 'grnd-ry' is added. This feature is the difference between the right-hand y value and the nose y value, which serves as the "ground" right y value.
End of explanation
from asl_utils import test_features_tryit
# TODO add df columns for 'grnd-rx', 'grnd-ly', 'grnd-lx' representing differences between hand and nose locations
asl.df['grnd-rx'] = asl.df['right-x'] - asl.df['nose-x']
asl.df['grnd-ly'] = asl.df['left-y'] - asl.df['nose-y']
asl.df['grnd-lx'] = asl.df['left-x'] - asl.df['nose-x']
# test the code
test_features_tryit(asl)
# collect the features into a list
features_ground = ['grnd-rx','grnd-ry','grnd-lx','grnd-ly']
#show a single set of features for a given (video, frame) tuple
[asl.df.ix[98,1][v] for v in features_ground]
Explanation: Try it!
End of explanation
training = asl.build_training(features_ground)
print("Training words: {}".format(training.words))
Explanation: Build the training set
Now that we have a feature list defined, we can pass that list to the build_training method to collect the features for all the words in the training set. Each word in the training set has multiple examples from various videos. Below we can see the unique words that have been loaded into the training set:
End of explanation
training.get_word_Xlengths('CHOCOLATE')
Explanation: The training data in training is an object of class WordsData defined in the asl_data module. in addition to the words list, data can be accessed with the get_all_sequences, get_all_Xlengths, get_word_sequences, and get_word_Xlengths methods. We need the get_word_Xlengths method to train multiple sequences with the hmmlearn library. In the following example, notice that there are two lists; the first is a concatenation of all the sequences(the X portion) and the second is a list of the sequence lengths(the Lengths portion).
End of explanation
df_means = asl.df.groupby('speaker').mean()
df_means
Explanation: More feature sets
So far we have a simple feature set that is enough to get started modeling. However, we might get better results if we manipulate the raw values a bit more, so we will go ahead and set up some other options now for experimentation later. For example, we could normalize each speaker's range of motion with grouped statistics using Pandas stats functions and pandas groupby. Below is an example for finding the means of all speaker subgroups.
End of explanation
asl.df['left-x-mean']= asl.df['speaker'].map(df_means['left-x'])
asl.df.head()
Explanation: To select a mean that matches by speaker, use the pandas map method:
End of explanation
from asl_utils import test_std_tryit
# TODO Create a dataframe named `df_std` with standard deviations grouped by speaker
df_std = asl.df.groupby('speaker').std()
# test the code
test_std_tryit(df_std)
Explanation: Try it!
End of explanation
# TODO add features for normalized by speaker values of left, right, x, y
# Name these 'norm-rx', 'norm-ry', 'norm-lx', and 'norm-ly'
# using Z-score scaling (X-Xmean)/Xstd
features_norm = ['norm-rx', 'norm-ry', 'norm-lx','norm-ly']
# Mean matched by speaker
asl.df['right-x-mean']= asl.df['speaker'].map(df_means['right-x'])
asl.df['right-y-mean']= asl.df['speaker'].map(df_means['right-y'])
asl.df['left-x-mean']= asl.df['speaker'].map(df_means['left-x'])
asl.df['left-y-mean']= asl.df['speaker'].map(df_means['left-y'])
# Std dev matched by speaker
asl.df['right-x-std']= asl.df['speaker'].map(df_std['right-x'])
asl.df['right-y-std']= asl.df['speaker'].map(df_std['right-y'])
asl.df['left-x-std']= asl.df['speaker'].map(df_std['left-x'])
asl.df['left-y-std']= asl.df['speaker'].map(df_std['left-y'])
# Add the actual normalized scores
asl.df['norm-rx'] = (asl.df['right-x'] - asl.df['right-x-mean']) / asl.df['right-x-std']
asl.df['norm-ry'] = (asl.df['right-y'] - asl.df['right-y-mean']) / asl.df['right-y-std']
asl.df['norm-lx'] = (asl.df['left-x'] - asl.df['left-x-mean']) / asl.df['left-x-std']
asl.df['norm-ly'] = (asl.df['left-y'] - asl.df['left-y-mean']) / asl.df['left-y-std']
# TODO add features for polar coordinate values where the nose is the origin
# Name these 'polar-rr', 'polar-rtheta', 'polar-lr', and 'polar-ltheta'
# Note that 'polar-rr' and 'polar-rtheta' refer to the radius and angle
'''
calculate polar coordinates with Cartesian to polar equations
use the np.arctan2 function and swap the x and y axes to move the 00 to 2π2π discontinuity
to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from 00 to 2π2π
occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results.
By swapping the x and y axes, that discontinuity move to directly above the speaker's head, an area not generally
used in signing.
'''
features_polar = ['polar-rr', 'polar-rtheta', 'polar-lr', 'polar-ltheta']
asl.df['polar-rr'] = np.sqrt(asl.df['grnd-rx']**2 + asl.df['grnd-ry']**2)
asl.df['polar-rtheta'] = np.arctan2(asl.df['grnd-rx'], asl.df['grnd-ry'])
asl.df['polar-lr'] = np.sqrt(asl.df['grnd-lx']**2 + asl.df['grnd-ly']**2)
asl.df['polar-ltheta'] = np.arctan2(asl.df['grnd-lx'], asl.df['grnd-ly'])
# TODO add features for left, right, x, y differences by one time step, i.e. the "delta" values discussed in the lecture
# Name these 'delta-rx', 'delta-ry', 'delta-lx', and 'delta-ly'
features_delta = ['delta-rx', 'delta-ry', 'delta-lx', 'delta-ly']
asl.df['delta-rx'] = asl.df['grnd-rx'].diff()
asl.df['delta-ry'] = asl.df['grnd-ry'].diff()
asl.df['delta-lx'] = asl.df['grnd-lx'].diff()
asl.df['delta-ly'] = asl.df['grnd-ly'].diff()
# Fill with 0 values
asl.df = asl.df.fillna(0)
# TODO add features of your own design, which may be a combination of the above or something else
# Name these whatever you would like
# TODO define a list named 'features_custom' for building the training set
# Normalize polar coordinates
features_polar_norm = ['pnorm-rx', 'pnorm-ry', 'pnorm-lx','pnorm-ly']
df_means = asl.df.groupby('speaker').mean()
df_std = asl.df.groupby('speaker').std()
# Mean matched by speaker
asl.df['polar-rr-mean']= asl.df['speaker'].map(df_means['polar-rr'])
asl.df['polar-rtheta-mean']= asl.df['speaker'].map(df_means['polar-rtheta'])
asl.df['polar-lr-mean']= asl.df['speaker'].map(df_means['polar-lr'])
asl.df['polar-ltheta-mean']= asl.df['speaker'].map(df_means['polar-ltheta'])
# Std dev matched by speaker
asl.df['polar-rr-std']= asl.df['speaker'].map(df_std['polar-rr'])
asl.df['polar-rtheta-std']= asl.df['speaker'].map(df_std['polar-rtheta'])
asl.df['polar-lr-std']= asl.df['speaker'].map(df_std['polar-lr'])
asl.df['polar-ltheta-std']= asl.df['speaker'].map(df_std['polar-ltheta'])
# Add the actual normalized scores
asl.df['pnorm-rx'] = (asl.df['polar-rr'] - asl.df['polar-rr-mean']) / asl.df['polar-rr-std']
asl.df['pnorm-ry'] = (asl.df['polar-rtheta'] - asl.df['polar-rtheta-mean']) / asl.df['polar-rtheta-std']
asl.df['pnorm-lx'] = (asl.df['polar-lr'] - asl.df['polar-lr-mean']) / asl.df['polar-lr-std']
asl.df['pnorm-ly'] = (asl.df['polar-ltheta'] - asl.df['polar-ltheta-mean']) / asl.df['polar-ltheta-std']
Explanation: <a id='part1_submission'></a>
Features Implementation Submission
Implement four feature sets and answer the question that follows.
- normalized Cartesian coordinates
- use mean and standard deviation statistics and the standard score equation to account for speakers with different heights and arm length
polar coordinates
calculate polar coordinates with Cartesian to polar equations
use the np.arctan2 function and swap the x and y axes to move the $0$ to $2\pi$ discontinuity to 12 o'clock instead of 3 o'clock; in other words, the normal break in radians value from $0$ to $2\pi$ occurs directly to the left of the speaker's nose, which may be in the signing area and interfere with results. By swapping the x and y axes, that discontinuity move to directly above the speaker's head, an area not generally used in signing.
delta difference
as described in Thad's lecture, use the difference in values between one frame and the next frames as features
pandas diff method and fillna method will be helpful for this one
custom features
These are your own design; combine techniques used above or come up with something else entirely. We look forward to seeing what you come up with!
Some ideas to get you started:
normalize using a feature scaling equation
normalize the polar coordinates
adding additional deltas
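For instance, a rough sketch of the feature-scaling (min-max rescaling) idea; the column name 'rescaled-rx' is hypothetical and only one coordinate is shown:
df_min = asl.df.groupby('speaker').min()
df_max = asl.df.groupby('speaker').max()
# (X - Xmin) / (Xmax - Xmin), with min/max matched by speaker
asl.df['rescaled-rx'] = (asl.df['right-x'] - asl.df['speaker'].map(df_min['right-x'])) / \
                        (asl.df['speaker'].map(df_max['right-x']) - asl.df['speaker'].map(df_min['right-x']))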
End of explanation
import unittest
# import numpy as np
class TestFeatures(unittest.TestCase):
def test_features_ground(self):
sample = (asl.df.ix[98, 1][features_ground]).tolist()
self.assertEqual(sample, [9, 113, -12, 119])
def test_features_norm(self):
sample = (asl.df.ix[98, 1][features_norm]).tolist()
np.testing.assert_almost_equal(sample, [ 1.153, 1.663, -0.891, 0.742], 3)
def test_features_polar(self):
sample = (asl.df.ix[98,1][features_polar]).tolist()
np.testing.assert_almost_equal(sample, [113.3578, 0.0794, 119.603, -0.1005], 3)
def test_features_delta(self):
sample = (asl.df.ix[98, 0][features_delta]).tolist()
self.assertEqual(sample, [0, 0, 0, 0])
sample = (asl.df.ix[98, 18][features_delta]).tolist()
self.assertTrue(sample in [[-16, -5, -2, 4], [-14, -9, 0, 0]], "Sample value found was {}".format(sample))
suite = unittest.TestLoader().loadTestsFromModule(TestFeatures())
unittest.TextTestRunner().run(suite)
Explanation: Question 1: What custom features did you choose for the features_custom set and why?
Answer 1: I chose to normalize the polar coordinate values where the nose is the origin. This ensures that the training data is independent of the speaker's height or body proportions.
<a id='part1_test'></a>
Features Unit Testing
Run the following unit tests as a sanity check on the defined "ground", "norm", "polar", and 'delta"
feature sets. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass.
End of explanation
import warnings
from hmmlearn.hmm import GaussianHMM
def train_a_word(word, num_hidden_states, features):
warnings.filterwarnings("ignore", category=DeprecationWarning)
training = asl.build_training(features)
X, lengths = training.get_word_Xlengths(word)
model = GaussianHMM(n_components=num_hidden_states, n_iter=1000).fit(X, lengths)
logL = model.score(X, lengths)
return model, logL
demoword = 'BOOK'
model, logL = train_a_word(demoword, 3, features_ground)
print("Number of states trained in model for {} is {}".format(demoword, model.n_components))
print("logL = {}".format(logL))
Explanation: <a id='part2_tutorial'></a>
PART 2: Model Selection
Model Selection Tutorial
The objective of Model Selection is to tune the number of states for each word HMM prior to testing on unseen data. In this section you will explore three methods:
- Log likelihood using cross-validation folds (CV)
- Bayesian Information Criterion (BIC)
- Discriminative Information Criterion (DIC)
Train a single word
Now that we have built a training set with sequence data, we can "train" models for each word. As a simple starting example, we train a single word using Gaussian hidden Markov models (HMM). By using the fit method during training, the Baum-Welch Expectation-Maximization (EM) algorithm is invoked iteratively to find the best estimate for the model for the number of hidden states specified from a group of sample seequences. For this example, we assume the correct number of hidden states is 3, but that is just a guess. How do we know what the "best" number of states for training is? We will need to find some model selection technique to choose the best parameter.
End of explanation
def show_model_stats(word, model):
print("Number of states trained in model for {} is {}".format(word, model.n_components))
variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])
for i in range(model.n_components): # for each hidden state
print("hidden state #{}".format(i))
print("mean = ", model.means_[i])
print("variance = ", variance[i])
print()
show_model_stats(demoword, model)
Explanation: The HMM model has been trained and information can be pulled from the model, including means and variances for each feature and hidden state. The log likelihood for any individual sample or group of samples can also be calculated with the score method.
End of explanation
my_testword = 'CHOCOLATE'
model, logL = train_a_word(my_testword, 3, features_ground) # Experiment here with different parameters
show_model_stats(my_testword, model)
print("logL = {}".format(logL))
Explanation: Try it!
Experiment by changing the feature set, word, and/or num_hidden_states values in the next cell to see changes in values.
End of explanation
%matplotlib inline
import math
from matplotlib import (cm, pyplot as plt, mlab)
def visualize(word, model):
    '''visualize the input model for a particular word'''
variance=np.array([np.diag(model.covars_[i]) for i in range(model.n_components)])
figures = []
for parm_idx in range(len(model.means_[0])):
xmin = int(min(model.means_[:,parm_idx]) - max(variance[:,parm_idx]))
xmax = int(max(model.means_[:,parm_idx]) + max(variance[:,parm_idx]))
fig, axs = plt.subplots(model.n_components, sharex=True, sharey=False)
colours = cm.rainbow(np.linspace(0, 1, model.n_components))
for i, (ax, colour) in enumerate(zip(axs, colours)):
x = np.linspace(xmin, xmax, 100)
mu = model.means_[i,parm_idx]
sigma = math.sqrt(np.diag(model.covars_[i])[parm_idx])
ax.plot(x, mlab.normpdf(x, mu, sigma), c=colour)
ax.set_title("{} feature {} hidden state #{}".format(word, parm_idx, i))
ax.grid(True)
figures.append(plt)
for p in figures:
p.show()
visualize(my_testword, model)
Explanation: Visualize the hidden states
We can plot the means and variances for each state and feature. Try varying the number of states trained for the HMM model and examine the variances. Are there some models that are "better" than others? How can you tell? We would like to hear what you think in the classroom online.
End of explanation
from my_model_selectors import SelectorConstant
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
word = 'VEGETABLE' # Experiment here with different words
model = SelectorConstant(training.get_all_sequences(), training.get_all_Xlengths(), word, n_constant=3).select()
print("Number of states trained in model for {} is {}".format(word, model.n_components))
Explanation: ModelSelector class
Review the ModelSelector class from the codebase found in the my_model_selectors.py module. It is designed to be a strategy pattern for choosing different model selectors. For the project submission in this section, subclass SelectorModel to implement the following model selectors. In other words, you will write your own classes/functions in the my_model_selectors.py module and run them from this notebook:
SelectorCV: Log likelihood with CV
SelectorBIC: BIC
SelectorDIC: DIC
You will train each word in the training set with a range of values for the number of hidden states, and then score these alternatives with the model selector, choosing the "best" according to each strategy. The simple case of training with a constant value for n_components can be called using the provided SelectorConstant subclass as follow:
End of explanation
from sklearn.model_selection import KFold
training = asl.build_training(features_ground) # Experiment here with different feature sets
word = 'VEGETABLE' # Experiment here with different words
word_sequences = training.get_word_sequences(word)
split_method = KFold()
for cv_train_idx, cv_test_idx in split_method.split(word_sequences):
print("Train fold indices:{} Test fold indices:{}".format(cv_train_idx, cv_test_idx)) # view indices of the folds
Explanation: Cross-validation folds
If we simply score the model with the Log Likelihood calculated from the feature sequences it has been trained on, we should expect that more complex models will have higher likelihoods. However, that doesn't tell us which would have a better likelihood score on unseen data. The model will likely be overfit as complexity is added. To estimate which topology model is better using only the training data, we can compare scores using cross-validation. One technique for cross-validation is to break the training set into "folds" and rotate which fold is left out of training. The "left out" fold is then scored. This gives us a proxy method of finding the best model to use on "unseen data". In the following example, a set of word sequences is broken into three folds using the scikit-learn KFold class object. When you implement SelectorCV, you will use this technique.
End of explanation
words_to_train = ['FISH', 'BOOK', 'VEGETABLE', 'FUTURE', 'JOHN']
import timeit
# TODO: Implement SelectorCV in my_model_selector.py
%load_ext autoreload
%autoreload 2
from my_model_selectors import SelectorCV
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorCV(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
# TODO: Implement SelectorBIC in module my_model_selectors.py
%load_ext autoreload
%autoreload 2
from my_model_selectors import SelectorBIC
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorBIC(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
# TODO: Implement SelectorDIC in module my_model_selectors.py
%load_ext autoreload
%autoreload 2
from my_model_selectors import SelectorDIC
training = asl.build_training(features_ground) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
for word in words_to_train:
start = timeit.default_timer()
model = SelectorDIC(sequences, Xlengths, word,
min_n_components=2, max_n_components=15, random_state = 14).select()
end = timeit.default_timer()-start
if model is not None:
print("Training complete for {} with {} states with time {} seconds".format(word, model.n_components, end))
else:
print("Training failed for {}".format(word))
Explanation: Tip: In order to run hmmlearn training using the X,lengths tuples on the new folds, subsets must be combined based on the indices given for the folds. A helper utility has been provided in the asl_utils module named combine_sequences for this purpose.
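A rough sketch of how the folds and that helper might fit together inside SelectorCV; the combine_sequences signature shown here is an assumption based on its description, so check asl_utils before relying on it:
from sklearn.model_selection import KFold
from asl_utils import combine_sequences

word_sequences = training.get_word_sequences(word)
split_method = KFold(n_splits=3)
for cv_train_idx, cv_test_idx in split_method.split(word_sequences):
    # assumed helper: combine_sequences(fold_indices, sequences) -> (X, lengths)
    X_train, lengths_train = combine_sequences(cv_train_idx, word_sequences)
    X_test, lengths_test = combine_sequences(cv_test_idx, word_sequences)
    # fit a GaussianHMM on the training folds, then score it on the held-out fold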
Scoring models with other criterion
Scoring model topologies with BIC balances fit and complexity within the training set for each word. In the BIC equation, a penalty term penalizes complexity to avoid overfitting, so that it is not necessary to also use cross-validation in the selection process. There are a number of references on the internet for this criterion. These slides include a formula you may find helpful for your implementation.
The advantages of scoring model topologies with DIC over BIC are presented by Alain Biem in this reference (also found here). DIC scores the discriminant ability of a training set for one word against competing words. Instead of a penalty term for complexity, it provides a penalty if model likelihoods for non-matching words are too similar to model likelihoods for the correct word in the word set.
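As a working reference while implementing, the commonly used forms of the two criteria are sketched below; the free-parameter count p assumes a GaussianHMM with diagonal covariances, so verify it against your own model setup:
import math

def bic_score(logL, n_components, n_features, n_datapoints):
    # BIC = -2 * logL + p * log(N); lower is better
    # assumed parameter count for a diagonal-covariance GaussianHMM
    p = n_components ** 2 + 2 * n_components * n_features - 1
    return -2 * logL + p * math.log(n_datapoints)

def dic_score(logL_this_word, logL_other_words):
    # DIC = log P(X(i)) - 1/(M-1) * sum of log P(X(j)) for j != i; higher is better
    return logL_this_word - sum(logL_other_words) / len(logL_other_words)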
<a id='part2_submission'></a>
Model Selection Implementation Submission
Implement SelectorCV, SelectorBIC, and SelectorDIC classes in the my_model_selectors.py module. Run the selectors on the following five words. Then answer the questions about your results.
Tip: The hmmlearn library may not be able to train or score all models. Implement try/except constructs as necessary to eliminate non-viable models from consideration.
End of explanation
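Before running the unit tests, here is a hedged sketch of how the two criteria described above can be scored from a fitted model. The free-parameter count p is one common choice for a diagonal-covariance GaussianHMM and may differ from your own derivation; lower BIC is better, higher DIC is better.
# Hedged sketch of the BIC and DIC scores; not the required SelectorBIC/SelectorDIC classes.
import math
import numpy as np
def bic_score(logL, n_states, n_features, n_datapoints):
    # common parameter count: transitions + start probs + means + diagonal variances
    p = n_states ** 2 + 2 * n_states * n_features - 1
    return -2.0 * logL + p * math.log(n_datapoints)
def dic_score(model, this_word, all_Xlengths):
    # logL of the trained word minus the mean logL of every other word under the same model;
    # in practice wrap the scoring calls in try/except as the tip above suggests.
    logL_this = model.score(*all_Xlengths[this_word])
    others = [model.score(*all_Xlengths[w]) for w in all_Xlengths if w != this_word]
    return logL_this - np.mean(others)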
from asl_test_model_selectors import TestSelectors
suite = unittest.TestLoader().loadTestsFromModule(TestSelectors())
unittest.TextTestRunner().run(suite)
Explanation: Question 2: Compare and contrast the possible advantages and disadvantages of the various model selectors implemented.
Answer 2:
Cross validation advantages: it creates models that generalize well to an unknown dataset, thus reducing overfitting. This is done by splitting the data and using each fold as validation while the remaining folds form the training set.
Cross validation disadvantages: it needs a big enough dataset.
BIC advantages: penalizes complexity (big number of free parameters) in an effort to combat overfitting.
DIC advantages: DIC discriminates more efficiently between the given words because it makes sure that there is a big difference between the log likelihood of a word under its own model and the log likelihoods of all the other words scored with that same model. DIC is better suited to the classification problem.
DIC disadvantages: If the number of words drastically increases, the execution time will also increase due to calculating the log likelihoods combinations for all of the words.
<a id='part2_test'></a>
Model Selector Unit Testing
Run the following unit tests as a sanity check on the implemented model selectors. The test simply looks for valid interfaces but is not exhaustive. However, the project should not be submitted if these tests don't pass.
End of explanation
# autoreload for automatically reloading changes made in my_model_selectors and my_recognizer
%load_ext autoreload
%autoreload 2
from my_model_selectors import SelectorConstant
def train_all_words(features, model_selector):
training = asl.build_training(features) # Experiment here with different feature sets defined in part 1
sequences = training.get_all_sequences()
Xlengths = training.get_all_Xlengths()
model_dict = {}
for word in training.words:
model = model_selector(sequences, Xlengths, word,
n_constant=3).select()
model_dict[word]=model
return model_dict
models = train_all_words(features_ground, SelectorConstant)
print("Number of word models returned = {}".format(len(models)))
Explanation: <a id='part3_tutorial'></a>
PART 3: Recognizer
The objective of this section is to "put it all together". Using the four feature sets created and the three model selectors, you will experiment with the models and present your results. Instead of training only five specific words as in the previous section, train the entire set with a feature set and model selector strategy.
Recognizer Tutorial
Train the full training set
The following example trains the entire set with the example features_ground feature set and the SelectorConstant model selector. Use this pattern for your experimentation and final submission cells.
End of explanation
test_set = asl.build_test(features_ground)
print("Number of test set items: {}".format(test_set.num_items))
print("Number of test set sentences: {}".format(len(test_set.sentences_index)))
Explanation: Load the test set
The build_test method in ASLdb is similar to the build_training method already presented, but there are a few differences:
- the object is type SinglesData
- the internal dictionary keys are the index of the test word rather than the word itself
- the getter methods are get_all_sequences, get_all_Xlengths, get_item_sequences and get_item_Xlengths
End of explanation
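A minimal sketch of the recognizer loop is shown here, using the get_item_Xlengths getter described above; your my_recognizer.recognize implementation may differ in its details.
# Hedged sketch: score every trained word model against every test item.
def recognize_sketch(models, test_set):
    probabilities, guesses = [], []
    for item in range(test_set.num_items):
        X, lengths = test_set.get_item_Xlengths(item)
        word_logL = {}
        for word, model in models.items():
            try:
                word_logL[word] = model.score(X, lengths)
            except Exception:
                word_logL[word] = float("-inf")  # model missing or unable to score this item
        probabilities.append(word_logL)
        guesses.append(max(word_logL, key=word_logL.get))
    return probabilities, guesses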
# TODO implement the recognize method in my_recognizer
%load_ext autoreload
%autoreload 2
from my_recognizer import recognize
from asl_utils import show_errors
# TODO Choose a feature set and model selector
features = features_ground
model_selector = SelectorCV
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
# TODO Choose a feature set and model selector
features = features_polar
model_selector = SelectorBIC
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
# TODO Choose a feature set and model selector
features = features_polar
model_selector = SelectorDIC
# TODO Recognize the test set and display the result with the show_errors method
models = train_all_words(features, model_selector)
test_set = asl.build_test(features)
probabilities, guesses = recognize(models, test_set)
show_errors(guesses, test_set)
Explanation: <a id='part3_submission'></a>
Recognizer Implementation Submission
For the final project submission, students must implement a recognizer following guidance in the my_recognizer.py module. Experiment with the four feature sets and the three model selection methods (that's 12 possible combinations). You can add and remove cells for experimentation or run the recognizers locally in some other way during your experiments, but retain the results for your discussion. For submission, you will provide code cells of only three interesting combinations for your discussion (see questions below). At least one of these should produce a word error rate of less than 60%, i.e. WER < 0.60 .
Tip: The hmmlearn library may not be able to train or score all models. Implement try/except constructs as necessary to eliminate non-viable models from consideration.
End of explanation
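show_errors already reports the word error rate; the sketch below only makes explicit how a WER figure such as the 0.60 threshold above can be computed. It assumes test_set.wordlist holds the reference words in item order.
# Hedged sketch of the WER calculation used to judge the combinations below.
def word_error_rate(guesses, test_set):
    reference = test_set.wordlist  # assumed attribute with the true test words
    errors = sum(1 for guess, truth in zip(guesses, reference) if guess != truth)
    return errors / float(len(reference))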
from asl_test_recognizer import TestRecognize
suite = unittest.TestLoader().loadTestsFromModule(TestRecognize())
unittest.TextTestRunner().run(suite)
Explanation: Question 3: Summarize the error results from three combinations of features and model selectors. What was the "best" combination and why? What additional information might we use to improve our WER? For more insight on improving WER, take a look at the introduction to Part 4.
Answer 3:
|combination | WER | correct | correct % |
| ------------ |:-------:| :------:|:---------:|
|ground_CV |0.534| 83 | 46.63 |
|ground_BIC |0.551 | 80 | 44.94 |
|ground_DIC | 0.573 | 76 | 42.70 |
|norm_CV | 0.607 | 70 | 39.33 |
|norm_BIC | 0.612 | 69 | 38.76 |
|norm_DIC | 0.596 | 72 | 40.45 |
|polar_CV | 0.562 | 78 | 43.82 |
|polar_BIC |0.545| 81 | 45.51 |
|polar_DIC |0.545| 81 | 45.51 |
|delta_CV | 0.601 | 71 | 39.89 |
|delta_BIC | 0.601 | 71 | 39.89 |
|delta_DIC | 0.624 | 67 | 37.64 |
|polar_norm_CV | 0.629 | 66 | 37.08 |
|polar_norm_BIC| 0.596 | 72 | 40.45 |
|polar_norm_DIC| 0.573 | 76 | 42.70 |
It can be seen from the table above that the best results were obtained by using the ground features in combination with the cross-validation (log likelihood) model selector: WER was 0.534, with 46.63% of words guessed correctly.
A good performance was also obtained by using polar coordinate values where the nose is the origin in combination with BIC (WER = 0.545, 45.51% correct guesses) and DIC (WER = 0.545, 45.51% correct guesses).
I would have expected BIC or DIC to generally perform better than CV, due to the fact that we have a small dataset. This was the case only for the polar coordinate features and normalized polar coordinates. However, cross validation has the advantage of creating models that generalize well to an unknown dataset, thus reducing overfitting.
Improving WER can be done by using Language Models. The basic idea is that each word has some probability of occurrence within the set, and some probability that it is adjacent to specific other words. We can use that additional information to make better choices.
<a id='part3_test'></a>
Recognizer Unit Tests
Run the following unit tests as a sanity check on the defined recognizer. The test simply looks for some valid values but is not exhaustive. However, the project should not be submitted if these tests don't pass.
End of explanation
# create a DataFrame of log likelihoods for the test word items
df_probs = pd.DataFrame(data=probabilities)
df_probs.head()
Explanation: <a id='part4_info'></a>
PART 4: (OPTIONAL) Improve the WER with Language Models
We've squeezed just about as much as we can out of the model and still only get about 50% of the words right! Surely we can do better than that. Probability to the rescue again in the form of statistical language models (SLM). The basic idea is that each word has some probability of occurrence within the set, and some probability that it is adjacent to specific other words. We can use that additional information to make better choices.
Additional reading and resources
Introduction to N-grams (Stanford Jurafsky slides)
Speech Recognition Techniques for a Sign Language Recognition System, Philippe Dreuw et al.; see the improved results of applying an LM to this data!
SLM data for this ASL dataset
Optional challenge
The recognizer you implemented in Part 3 is equivalent to a "0-gram" SLM. Improve the WER with the SLM data provided with the data set in the link above using "1-gram", "2-gram", and/or "3-gram" statistics. The probabilities data you've already calculated will be useful and can be turned into a pandas DataFrame if desired (see next cell).
Good luck! Share your results with the class!
End of explanation |
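As a starting point for that challenge, here is a hedged sketch of 1-gram rescoring. It assumes a dictionary lm_logprob of log unigram probabilities built from the SLM data linked above; alpha is a language-model weight to tune, and the default value is only an assumption.
# Hedged sketch: combine acoustic log likelihoods with unigram log probabilities.
def rescore_with_unigram(probabilities, lm_logprob, alpha=20.0):
    guesses = []
    for word_logL in probabilities:
        best = max(word_logL,
                   key=lambda w: word_logL[w] + alpha * lm_logprob.get(w, -99.0))
        guesses.append(best)
    return guesses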
14,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.algo - filtre de Sobel
Le filtre de Sobel est utilisé pour calculer des gradients dans une image. L'image ainsi filtrée révèle les forts contrastes.
Step1: Exercice 1
Step2: Mais avant de pouvoir faire des calculs dessus, il faut pouvoir convertir l'image en un tableau numpy avec la fonction numpy.asarray.
Step3: Une fois les calculs effectués, il faut convertir le tableau numpy en image. On peut par exemple blanchir tout une partie de l'image et l'afficher.
Step4: Et maintenant, il s'agit d'appliquer le filtre de Canny uniforme présenté ci-dessus et d'afficher l'image, soit en utilisant numpy, soit sans numpy en convertissant l'image en liste avec la méthode tolist. On pourra comparer les temps de calcul. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.algo - filtre de Sobel
The Sobel filter is used to compute gradients in an image. The filtered image highlights areas of strong contrast.
End of explanation
from pyquickhelper.loghelper import noLOG
from pyensae.datasource import download_data
f = download_data("python.png", url="http://imgs.xkcd.com/comics/")
from IPython.display import Image
Image("python.png")
Explanation: Exercice 1 : application d'un filtre
The Sobel filter is a filter applied to an image to compute its gradient and thus detect the contours it contains. The Canny filter is used to blur an image. As a first step, we will apply a 3x3 filter:
$\left( \begin{array}{ccc} 1&1&1 \\ 1&1&1 \\ 1&1&1 \end{array} \right)$
which is applied to the 3x3 neighbourhood of pixel $p_5$:
$\left( \begin{array}{ccc} p_1&p_2&p_3 \\ p_4&p_5&p_6 \\ p_7&p_8&p_9 \end{array} \right)$
After applying the filter to this pixel, the result becomes:
$\left( \begin{array}{ccc} ?&?&? \\ ?& \sum_{i=1}^9 p_i &? \\ ?&?&? \end{array} \right)$
We now want to apply this filter to the following image:
End of explanation
import PIL
import PIL.Image
im = PIL.Image.open("python.png")
from PIL.ImageDraw import Draw
import numpy
tab = numpy.asarray(im).copy()
tab.flags.writeable = True  # so that the image array can be modified
"dimension",tab.shape, " type", type(tab[0,0])
Explanation: But before doing any computation on it, the image must be converted into a numpy array with the numpy.asarray function.
End of explanation
tab[100:300,200:400] = 255
im2 = PIL.Image.fromarray(numpy.uint8(tab))
im2.save("python_white.png")
Image("python_white.png")
Explanation: Once the computations are done, the numpy array must be converted back into an image. For example, we can whiten a whole region of the image and display it.
End of explanation
l = tab.tolist()
len(l),len(l[0])
Explanation: Now apply the uniform Canny filter presented above and display the image, either with numpy or without numpy by converting the image to a list with the tolist method. The computation times can then be compared. A minimal numpy sketch is given below.
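One possible numpy approach is sketched here as an assumption, not the expected solution: each inner pixel is replaced by the average of its 3x3 neighbourhood (the sum divided by 9 so values stay in the 0-255 range), and border pixels are left untouched for simplicity.
# Hedged sketch: average each pixel with its 3x3 neighbourhood using shifted slices.
import numpy
def filtre_uniforme(tab):
    t = tab.astype(float)
    res = t.copy()
    acc = numpy.zeros_like(t[1:-1, 1:-1])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += t[1 + dy:t.shape[0] - 1 + dy, 1 + dx:t.shape[1] - 1 + dx]
    res[1:-1, 1:-1] = acc / 9.0
    return numpy.uint8(res.clip(0, 255))
im3 = PIL.Image.fromarray(filtre_uniforme(tab))
im3.save("python_filtre.png")  # hypothetical output file name
Image("python_filtre.png")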
End of explanation |
14,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: DWD
Source ID: MPI-ESM-1-2-HR
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
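For illustration only, a value could be recorded with the "Set as follows" template shown in the cell above; "whole atmosphere" is just an assumed choice for a hypothetical model, not an official entry for this document.
# Hypothetical example only, following the template above:
# DOC.set_value("whole atmosphere")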
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
14,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression test suite
Step1: The IMF allows to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with
(I) $N_{12}$ = k_N $\int _{m1}^{m2} m^{-2.35} dm$
Where k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$
since the total mass $M_{12}$ in the mass interval above can be estimated with
(II) $M_{12}$ = k_N $\int _{m1}^{m2} m^{-1.35} dm$
With a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived
Step2: The total number of stars $N_{tot}$ is then
Step3: Distinguish between 2 sources (AGB,massive)
Step4: Using the mass boundaries chosen in the yield tables
Step5: Compare final yields
Step6: Plotting
Step7: Simulation results compared with semi-analytical calculations with C,N,O.
Distinguish between all 3 sources (AGB,massive AND SNIA)
DTD taken from Maoz
Step8: Test of parameter transitionmass
default (above) is 8Msun; needs to chosen so it agrees with yield input!
Step9: Check of the exclude_masses parameter
Default is exclude_masses=[32.,60.] because both can be only used in SSPs of solar Z and in no continous SFR simulations.
This test requires to check the output and see if there is any 6M or 7Msun yield taken.
Step10: For case where 3Msun excluded, which is low-mass with C, the boundary (3.5Msun) changes to 3Msun and hence N-14 is ejected in lower-mass stars.
Step11: With transitionmass and exclude_mass | Python Code:
#from imp import *
#s=load_source('sygma','/home/nugrid/nugrid/SYGMA/SYGMA_online/SYGMA_dev/sygma.py')
%pylab nbagg
import sygma as s
reload(s)
s.__file__
from scipy.integrate import quad
from scipy.interpolate import UnivariateSpline
#import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
#import mpld3
#mpld3.enable_notebook()
Explanation: Regression test suite: Test of basic SSP GCE features
Test of SSP with artificial yields of C,N,O + Ni provided in tables.
C12 only in low-mass stars (up to 3Msun).
N14 only in intermediate mass stars (up to 7Msun).
O16 only in massive stars.
Ni-58 only in SNIa.
Each star produces only 0.1Msun of yields.
The focus is on basic GCE features.
You can find the documentation <a href="doc/sygma.html">here</a>.
Results:
$\odot$ Distinguished final ISM from different sources (low mass, massive AGB, massive stars, SN1a)
$\odot$ Evolution of different sources
$\odot$ Check of transition mass
$\odot$ Check of the exclude_masses parameter
$\odot$ IMPORTANT: Change of SNIa (time) contribution when changing the mass interval! Vogelsberger SNIa does not allow to only partly include SNIa contribution
End of explanation
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
Explanation: The IMF allows us to calculate the number of stars $N_{12}$ in the mass interval [m1,m2] with
(I) $N_{12} = k_N \int_{m1}^{m2} m^{-2.35}\,dm$
Where k_N is the normalization constant. It can be derived from the total amount of mass of the system $M_{tot}$
since the total mass $M_{12}$ in the mass interval above can be estimated with
(II) $M_{12} = k_N \int_{m1}^{m2} m^{-1.35}\,dm$
With a total mass interval of [1,30] and $M_{tot}=1e11$ the $k_N$ can be derived:
$10^{11} = \frac{k_N}{0.35}\left(1^{-0.35} - 30^{-0.35}\right)$
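As a quick numerical cross-check of this normalization (a sketch added here, not part of the original test), one can integrate the mass-weighted IMF with scipy and verify that the total mass comes back out:
# Hedged cross-check: integrating m*IMF over [1,30] with k_N should recover Mtot = 1e11.
from scipy.integrate import quad
k_N_check = 1e11*0.35/ (1**-0.35 - 30**-0.35)
mass_check = quad(lambda m: k_N_check * m * m**-2.35, 1, 30)[0]
print 'Total mass recovered (should be ~1e11):', mass_check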
End of explanation
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35)
Explanation: The total number of stars $N_{tot}$ is then:
End of explanation
s1=s.sygma(iolevel=0,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=True,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
print s1.history.isotopes
Yield_lagb_sim=s1.history.ism_iso_yield[-1][0]
Yield_magb_sim=s1.history.ism_iso_yield[-1][1]
Yield_massive_sim=s1.history.ism_iso_yield[-1][2]
Yield_sn1a_sim=s1.history.ism_iso_yield[-1][3]
Explanation: Distinguish between 2 sources (AGB,massive)
End of explanation
N_lagb=k_N/1.35 * (1**-1.35 - 3.5**-1.35)
Yield_lagb=0.1*N_lagb
N_magb=k_N/1.35 * (3.5**-1.35 - 8.**-1.35)
Yield_magb=0.1*N_magb
N_massive=k_N/1.35 * (8.**-1.35 - 30**-1.35)
Yield_massive=0.1*N_massive
Explanation: Using the mass boundaries chosen in the yield tables:
low mass AGB: till 4 [1,3.5]
massive AGB : till 8 [3.5,8] #Different because M12 star is missing in set1.2
massive stars till 30 [8,30]
End of explanation
print 'Should be 1:',Yield_lagb_sim/Yield_lagb
print 'Should be 1:',Yield_magb_sim/Yield_magb
print 'Should be 1:',Yield_massive_sim/Yield_massive
Explanation: Compare final yields:
End of explanation
s1.plot_mass(specie='C',label='C',color='r',shape='-',marker='o',markevery=800)
s1.plot_mass(specie='N',label='N',color='b',shape='-',marker='o',markevery=800)
s1.plot_mass(specie='O',label='O',color='g',shape='-',marker='o',markevery=800)
m=[1,1.65,2,3,4,5,6,7,12,15,20,25]
#ages=[1.177e10,2.172e9,1.265e9,4.141e8,1.829e8,1.039e8,6.95e7,5.022e7,1.165e7,8.109e6,6.628e6]
ages=[5.67e9,1.211e9,6.972e8,2.471e8,1.347e8,8.123e7,5.642e7,4.217e7,1.892e7,1.381e7,9.895e6,7.902e6] #0.0001 lifetiems
def yields(min1,max1,k_N):
return ( k_N/1.35 * (min1**-1.35 - max1**-1.35) ) * 0.1
yields1_lagb=[]
age_lagb=[]
yields1_magb=[]
age_magb=[]
yields1_massive=[]
age_massive=[]
for m1 in m:
idx=m.index(m1)
#print m1,idx
if m1>=1 and m1<=3.5:
yields1_lagb.append(yields(m1,3.5,k_N))
age_lagb.append(ages[idx])
#print yields(1,m1,k_N)
#print ages[idx]
if m1>=3.5 and m1<=8.:
yields1_magb.append(yields(m1,8,k_N))
age_magb.append(ages[idx])
if m1>=8 and m1<=30:
yields1_massive.append(yields(m1,30,k_N))
age_massive.append(ages[idx])
plt.plot(age_lagb,yields1_lagb,marker='+',color='r',linestyle='',markersize=30,label='C*')
plt.plot(age_magb,yields1_magb,marker='+',color='b',linestyle='',markersize=30,label='N*')
plt.plot(age_massive,yields1_massive,marker='+',color='g',linestyle='',markersize=30,label='O*')
plt.legend(loc=4,prop={'size':14})
plt.xlim(7e6,1.5e10)
Explanation: Plotting
End of explanation
Yield_tot_sim=s1.history.ism_iso_yield_1a[-1][0]
from scipy.interpolate import UnivariateSpline
zm_lifetime_grid=s1.zm_lifetime_grid_current
idx_z = (np.abs(zm_lifetime_grid[0]-0.0001)).argmin() #Z=0
grid_masses=zm_lifetime_grid[1][::-1]
grid_lifetimes=zm_lifetime_grid[2][idx_z][::-1]
spline_degree1=2
smoothing1=0
boundary=[None,None]
spline_lifetime = UnivariateSpline(grid_lifetimes,np.log10(grid_masses),bbox=boundary,k=spline_degree1,s=smoothing1)
from scipy.integrate import quad
def spline1(t):
minm_prog1a=3
#if minimum progenitor mass is larger than 3Msun due to IMF range:
return max(minm_prog1a,10**spline_lifetime(np.log10(t)))
#funciton giving the total (accummulatitive) number of WDs at each timestep
def wd_number(m,t):
#print 'time ',t
#print 'mass ',m
mlim=10**spline_lifetime(np.log10(t))
maxm_prog1a=8
if mlim>maxm_prog1a:
return 0
else:
mmin=0
mmax=0
inte=0
#normalized to 1msun!
def g2(mm):
return mm*mm**-2.35
norm=1./quad(g2,1,30)[0]
return norm*m**-2.35 #self.__imf(mmin,mmax,inte,m)
def maoz_sn_rate(m,t):
return wd_number(m,t)* 4.0e-13 * (t/1.0e9)**-1
def maoz_sn_rate_int(t):
return quad( maoz_sn_rate,spline1(t),8,args=t)[0]
#in this formula, (paper) sum_sn1a_progenitors number of
maxm_prog1a=8
longtimefornormalization=1.3e10 #yrs
A = 1e-3 / quad(maoz_sn_rate_int,0,longtimefornormalization)[0]
print 'Norm. constant A:',A
n1a= A* quad(maoz_sn_rate_int,0,1.3e10)[0]
Yield_sn1a=n1a*1e11*0.1 #specialfactor
print 'Should be 1:',Yield_sn1a_sim/Yield_sn1a
print 'Check specific Ni-56: ',s1.history.ism_iso_yield[-1][-1]/Yield_sn1a #last isotope in s1.history.isotopes, see above
Explanation: Simulation results compared with semi-analytical calculations with C,N,O.
Distinguish between all 3 sources (AGB,massive AND SNIA)
DTD taken from Maoz
End of explanation
s2=s.sygma(iolevel=0,transitionmass=7.2,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
s3=s.sygma(iolevel=0,transitionmass=8,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
s4=s.sygma(iolevel=0,transitionmass=9,mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
N_agb=k_N/1.35 * (1**-1.35 - 7.2**-1.35)
Yield_agb7=0.1*N_agb
N_massive=k_N/1.35 * (7.2**-1.35 - 30**-1.35)
Yield_massive7=0.1*N_massive
N_agb=k_N/1.35 * (1**-1.35 - 8.**-1.35)
Yield_agb8=0.1*N_agb
N_massive=k_N/1.35 * (8.**-1.35 - 30**-1.35)
Yield_massive8=0.1*N_massive
N_agb=k_N/1.35 * (1**-1.35 - 9.**-1.35)
Yield_agb9=0.1*N_agb
N_massive=k_N/1.35 * (9.**-1.35 - 30**-1.35)
Yield_massive9=0.1*N_massive
print('should be 1:', sum(s2.history.ism_elem_yield_agb[-1])/Yield_agb7)
print('should be 1:', sum(s2.history.ism_elem_yield_massive[-1])/Yield_massive7)
print('should be 1:', sum(s3.history.ism_elem_yield_agb[-1])/Yield_agb8)
print('should be 1:', sum(s3.history.ism_elem_yield_massive[-1])/Yield_massive8)
print('should be 1:', sum(s4.history.ism_elem_yield_agb[-1])/Yield_agb9)
print('should be 1:', sum(s4.history.ism_elem_yield_massive[-1])/Yield_massive9)
fig=4
s2.plot_totmasses(fig=fig,mass='gas', source='all', norm='no', label='Tot,7Msun', shape='', marker='o', color='', markevery=20, log=True)
s2.plot_totmasses(fig=fig,mass='gas', source='agb', norm='no', label='AGB, 7Msun', shape='', marker='s', color='', markevery=20, log=True)
s2.plot_totmasses(fig=fig,mass='gas', source='massive', norm='no', label='Massive, 7Msun', shape='', marker='D', color='', markevery=20, log=True)
s3.plot_totmasses(fig=fig,mass='gas', source='all', norm='no', label='Tot, 8Msun', shape='', marker='x', color='', markevery=20, log=True)
s3.plot_totmasses(fig=fig,mass='gas', source='agb', norm='no', label='AGB, 8Msun', shape='', marker='+', color='', markevery=20, log=True)
s3.plot_totmasses(fig=fig,mass='gas', source='massive', norm='no', label='Massive, 8Msun', shape='', marker='>', color='', markevery=20, log=True)
s4.plot_totmasses(fig=fig,mass='gas', source='all', norm='no', label='Tot, 9Msun', shape='', marker='p', color='', markevery=20, log=True)
s4.plot_totmasses(fig=fig,mass='gas', source='agb', norm='no', label='AGB, 9Msun', shape='', marker='^', color='', markevery=20, log=True)
s4.plot_totmasses(fig=fig,mass='gas', source='massive', norm='no', label='Massive, 9Msun', shape='', marker='+', color='', markevery=20, log=True)
plt.legend(prop={'size':12})
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5),markerscale=0.8,fontsize=12)
plt.ylim(4e6,4e9)
Explanation: Test of parameter transitionmass
default (above) is 8Msun; it needs to be chosen so that it agrees with the yield input!
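For reference, the analytic star counts used in these checks follow from the Salpeter IMF, $\phi(m)\propto m^{-2.35}$; this is just a restatement of the k_N formulas in the code, nothing new:
$$k_N \int_{1}^{30} m\,m^{-2.35}\,\mathrm{d}m = \frac{k_N}{0.35}\left(1^{-0.35}-30^{-0.35}\right) = 10^{11}\,M_\odot , \qquad N(m_1,m_2) = k_N \int_{m_1}^{m_2} m^{-2.35}\,\mathrm{d}m = \frac{k_N}{1.35}\left(m_1^{-1.35}-m_2^{-1.35}\right) ,$$
and multiplying $N(m_1,m_2)$ by the constant $0.1\,M_\odot$ ejected per star in the test yield table gives the expected ISM yield that the "should be 1" ratios compare against.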
End of explanation
from importlib import reload
reload(s)
s1=s.sygma(iolevel=0,exclude_masses=[32.,60.],mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
s2=s.sygma(iolevel=0,exclude_masses=[32.,60.,7,6],mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
#s3=s.sygma(iolevel=1,exclude_masses=[],mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=True,iniZ=-1,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
s3=s.sygma(iolevel=0,exclude_masses=[32.,60.,7,6,3],mgal=1e11,dt=1e7,tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt', sn1a_table='yield_tables/sn1a_cnoni.txt', iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
# k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)
Yield=0.1*N_tot
print('Should be 1:', sum(s1.history.ism_iso_yield[-1])/Yield)
print('Should be 1:', sum(s2.history.ism_iso_yield[-1])/Yield)
N_tot=k_N/1.35 * (1**-1.35 - 8**-1.35) #(II)
Yield=0.1*N_tot
print(sum(s1.history.ism_elem_yield_agb[-1])/Yield)
N_tot=k_N/1.35 * (8**-1.35 - 30**-1.35) #(II)
Yield=0.1*N_tot
print(sum(s1.history.ism_elem_yield_massive[-1])/Yield)
Explanation: Check of the exclude_masses parameter
Default is exclude_masses=[32.,60.] because both masses can only be used in SSPs of solar Z and not in continuous-SFR simulations.
This test requires checking the output to see whether any 6 Msun or 7 Msun yields are used.
End of explanation
Yield_lagb_sim=s3.history.ism_iso_yield[-1][0]
Yield_magb_sim=s3.history.ism_iso_yield[-1][1]
Yield_massive_sim=s3.history.ism_iso_yield[-1][2]
N_lagb=k_N/1.35 * (1**-1.35 - 3**-1.35)
Yield_lagb=0.1*N_lagb
N_magb=k_N/1.35 * (3**-1.35 - 8.**-1.35)
Yield_magb=0.1*N_magb
N_massive=k_N/1.35 * (8.**-1.35 - 30**-1.35)
Yield_massive=0.1*N_massive
print 'Should be 1:',Yield_lagb_sim/Yield_lagb
print 'Should be 1:',Yield_magb_sim/Yield_magb
print 'Should be 1:',Yield_massive_sim/Yield_massive
Explanation: For case where 3Msun excluded, which is low-mass with C, the boundary (3.5Msun) changes to 3Msun and hence N-14 is ejected in lower-mass stars.
End of explanation
s1=s.sygma(iolevel=0,exclude_masses=[32.,60.,7,6],transitionmass=6,mgal=1e11,dt=1e7,
tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,
hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt',
sn1a_table='yield_tables/sn1a_cnoni.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
s2=s.sygma(iolevel=0,exclude_masses=[32.,60.,12],transitionmass=13,mgal=1e11,dt=1e7,
tend=1.3e10,imf_type='salpeter',alphaimf=2.35,imf_bdys=[1,30],sn1a_on=False,
hardsetZ=0.0001,table='yield_tables/isotope_yield_table_cnoni.txt',
sn1a_table='yield_tables/sn1a_cnoni.txt',
iniabu_table='yield_tables/iniabu/iniab1.0E-04GN93_alpha_cnoni.ppn')
k_N=1e11*0.35/ (1**-0.35 - 30**-0.35) #(I)
N_tot=k_N/1.35 * (1**-1.35 - 30**-1.35) #(II)
Yield=0.1*N_tot
print 'Should be 1:',sum(s1.history.ism_iso_yield[-1])/Yield
fig=1
s1.plot_totmasses(fig=fig,marker='^',label='all, mt=6')
s1.plot_totmasses(fig=fig,marker='>',source='agb',label='agb,mt=6')
s1.plot_totmasses(fig=fig,marker='<',source='massive',label='massive,mt=6')
s1.plot_totmasses(fig=fig,source='sn1a',label='sn1a,mt=6')
s2.plot_totmasses(fig=fig,label='all, mt=12')
s2.plot_totmasses(fig=fig,source='agb',label='agb,mt=12')
s2.plot_totmasses(fig=fig,source='massive',label='massive,mt=12')
s2.plot_totmasses(fig=fig,source='sn1a',label='sn1a,mt=12')
Explanation: With transitionmass and exclude_mass: Change transitionmass to 6Msun
transition masses at : 6,13Msun. excluded in one case 6,7 in the other 12.
End of explanation |
14,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up 30 times in 25000 reviews. I think it's fair to say this is a tiny proportion. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()# bag of words here
for idx, row in reviews.iterrows():
total_counts.update(row[0].lower().replace(",", " ").replace(".", " ").split(" "))
print("Total words in data set: ", len(total_counts))
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {}## create the word-to-index dictionary here
for i, word in enumerate(vocab):
word2idx[word] = i
Explanation: The last word in our vocabulary shows up 30 times in 25000 reviews. I think it's fair to say this is a tiny proportion. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
# Slow....
def text_to_vector(text):
words_vector = np.zeros(len(vocab))
words = text.lower().replace(",", " ").replace(".", " ").split(" ")
keys = list(word2idx)
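# note: checking membership in this plain list scans the whole vocabulary for every word, which is what makes this version slow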
for key in words:
if key in keys:
words_vector[word2idx[key]] += 1
return words_vector
# Mat's Fast solution
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.replace(",", " ").replace(".", " ").split(" "):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainX.shape[1]
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
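For intuition, to_categorical simply one-hot encodes the integer class labels; a tiny illustrative call (the values are made up, not taken from this data set):
to_categorical([0, 1, 1], 2)
# -> [[1, 0], [0, 1], [0, 1]]  (one row per label, one column per class)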
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, trainX.shape[1]])
net = tflearn.fully_connected(net, 200 , activation='ReLU')
net = tflearn.fully_connected(net, 25 , activation='ReLU')
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
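(For reference, the softmax activation turns the two output scores into class probabilities: $\mathrm{softmax}(z)_j = e^{z_j} / \sum_k e^{z_k}$.)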
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
14,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import Libraries
Step1: Read image and inspect values of the image at different locations
Step2: RGB pixel intensity 0-255
Step3: RGB line intensity 0-255 | Python Code:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Import Libraries
End of explanation
img_RGB = cv2.imread('demo1.jpg')
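# note: cv2.imread returns pixel data in BGR channel order, hence the BGR->RGB conversion below for display with matplotlib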
plt.imshow(cv2.cvtColor(img_RGB, cv2.COLOR_BGR2RGB))
print('Shape_RGB:', img_RGB.shape)
print('Type_RGB:', img_RGB.dtype)
Explanation: Read image and inspect values of the image at different locations
End of explanation
print('RGB intensity at 300,250:',img_RGB[300,250])
Explanation: RGB pixel intensity 0-255
End of explanation
img_RGB_lineintensity = img_RGB[600]
plt.plot(img_RGB_lineintensity)
Explanation: RGB line intensity 0-255
End of explanation |
14,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook for cellpy batch processing
You can fill in the Markdown cells (the cells without "numbering") by double-clicking them. Also remember to press shift + enter to execute a cell.
A couple of useful links
Step1: Creating pages and initialise the cellpy batch object
If you need to create Journal Pages, please provide appropriate names for the project and the experiment to allow cellpy to build the pages.
Step2: Initialisation
Step3: Set parameters
Step4: Run
Step5: 4. Looking at the data
Summaries
Step6: Cycles
Step7: Selecting specific cells and investigating them
Step8: Let's see how the smoothing (interpolation) method works
Step9: Using hvplot for plotting summaries
You can for example use hvplot for looking more at your summary data
Step10: Looking more in-depth and utilising advanced features
OCV relaxation points
Picking out 5 points on each OCV relaxation curve (distributed by last, last/2, last/2/2, ..., first).
Step11: Looking closer at some summary-plots
Step12: 5. Checking for more details per cycle
A. pick the CellpyData object for one of the cells
Step13: B. Get some voltage curves for some cycles and plot them
The method get_cap can be used to extract voltage curves.
Step14: Looking at some dqdv data
Get capacity cycles and make dqdv using the ica module
Step15: Put it in a for-loop for plotting many ica plots
Step16: Get all the dqdv data in one go | Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cellpy
from cellpy import prms
from cellpy import prmreader
from cellpy.utils import batch
import holoviews as hv
%matplotlib inline
hv.extension("bokeh")
######################################################################
## ##
## development ##
## ##
######################################################################
from pathlib import Path
from pprint import pprint
# Use these when working on my work PC:
test_data_path = r"C:\Scripting\MyFiles\development_cellpy\testdata"
out_data_path = r"C:\Scripting\Processing\Test\out"
# Use these when working on my MacBook:
test_data_path = "/Users/jepe/scripting/cellpy/dev_data/fullcell"
out_data_path = "/Users/jepe/scripting/cellpy/dev_data/out"
test_data_path = Path(test_data_path)
out_data_path = Path(out_data_path)
print(" SETTING SOME PRMS ".center(80, "="))
prms.Paths.db_filename = "cellpy_db.xlsx"
prms.Paths.cellpydatadir = test_data_path / "hdf5"
prms.Paths.outdatadir = out_data_path
prms.Paths.rawdatadir = test_data_path / "data"
prms.Paths.db_path = test_data_path / "db"
prms.Paths.filelogdir = out_data_path
pprint(prms.Paths)
## Uncomment this and run for checking your cellpy parameters.
# prmreader.info()
Explanation: Notebook for cellpy batch processing
You can fill in the Markdown cells (the cells without "numbering") by double-clicking them. Also remember to press shift + enter to execute a cell.
A couple of useful links:
- How to write MarkDown
- Jupyter notebooks
- cellpy
This notebook uses the following packages
python >= 3.6
cellpy >= 0.3.0
pandas
numpy
matplotlib
bokeh
pyviz (holoviews)
1. Key information about the current experiment
Experimental-id: xxx
Short-name: xxx
Project: project name
By: your name
Date: xx.xx.xxxx
2. Short summary of the experiment before processing
It is often helpful to formulate what you wanted to achieve with your experiment before actually going into the depths of the data. I believe that it does not make you "biased" when processing your data, but instead sharpens your mind and motivates you to look more closely at your results. I might be wrong, of course. If so, just skip filling in this part.
Main purpose
(State the main hypothesis for the current set of experiments)
Expected outcome
(What do you expect to find out? What kind of tests did you perform?)
Special considerations
(State if there are any special considerations for this experiment)
3. Processing data
Setting up everything
End of explanation
# Please fill in here
project = "ocv_tests"
name = "first"
Explanation: Creating pages and initialise the cellpy batch object
If you need to create Journal Pages, please provide appropriate names for the project and the experiment to allow cellpy to build the pages.
End of explanation
print(" INITIALISATION OF BATCH ".center(80, "="))
b = batch.init(name, project, default_log_level="INFO", db_reader=None)
Explanation: Initialisation
End of explanation
# setting some prms
b.experiment.export_raw = False
b.experiment.export_cycles = False
b.experiment.export_ica = False
b.experiment.all_in_memory = True # store all data in memory, defaults to False
b.experiment.force_raw_file = True
Explanation: Set parameters
End of explanation
# load info from your db and write the journal pages
b.create_journal(from_db=False)
b.pages
filename = "20190204_FC_snx012_01_cc_03"
mass = 0.5
total_mass = 1.0
loading = 0.1
fixed = False
label = "fc_snx012_01"
cell_type = "full_cell"
raw_file_name = [test_data_path / "20190204_FC_snx012_01_cc_01.res"]
cellpy_file_name = out_data_path / "20190204_FC_snx012_01_cc_01.h5"
group = 1
sub_group = 1
b.pages.loc[filename] = [
mass,
total_mass,
loading,
fixed,
label,
cell_type,
raw_file_name,
cellpy_file_name,
group,
sub_group,
]
b.pages
# create the appropriate folders
b.paginate()
# load the data (and save .csv-files if you have set export_(raw/cycles/ica) = True)
# (this might take some time)
b.update()
# collect summary-data (e.g. charge capacity vs cycle number) from each cell and export to .csv-file(s).
b.make_summaries()
print(" FINISHED ".center(80, "-"))
Explanation: Run
End of explanation
# Plot the charge capacity and the C.E. (and resistance) vs. cycle number (standard plot)
b.plot_summaries()
# Show the journal pages
# b.experiment.journal.pages.head()
# Show the most important part of the journal pages
b.report
# b.experiment.status()
# b.summaries.head()
Explanation: 4. Looking at the data
Summaries
End of explanation
b.experiment.cell_names
d = b.experiment.data["20190204_FC_snx012_01_cc_01"]
d
%%opts Curve (color=hv.Palette('Magma'))
voltage_curves = dict()
for label in b.experiment.cell_names:
d = b.experiment.data[label]
curves = d.get_cap(label_cycle_number=True, interpolated=True, number_of_points=100)
curve = hv.Curve(curves, kdims=["capacity", "cycle"], vdims="voltage").groupby("cycle").overlay().opts(show_legend=False)
voltage_curves[label] = curve
NdLayout = hv.NdLayout(voltage_curves, kdims='label').cols(3)
NdLayout
%%opts Curve (color=hv.Palette('Magma'))
ocv_curves = dict()
for label in b.experiment.cell_names:
d = b.experiment.data[label]
ocv_data = d.get_ocv(direction="up", number_of_points=40)
ocv_curve = hv.Curve(ocv_data, kdims=["Step_Time", "Cycle_Index"], vdims="Voltage").groupby("Cycle_Index").overlay().opts(show_legend=False)
ocv_curves[label] = ocv_curve
NdLayout = hv.NdLayout(ocv_curves, kdims='label').cols(3)
NdLayout
import numpy as np
df = pd.DataFrame(
{
"one": [1, 2, 3],
"two": [10, 20, 30],
"name0": ["ocv_one", "ocv", "ch"],
"name1": ["ocv_one", "ocv_two", None],
"name2": ["ocv_x", "xx", np.nan],
}
)
df
df.loc[df.name2.str.startswith("ocv", na=False), :]
Explanation: Cycles
End of explanation
# This will show you all your cell names
cell_labels = b.experiment.cell_names
cell_labels
# This is how to select the data (CellpyData-objects)
data1 = b.experiment.data["20190204_FC_snx012_01_cc_01"]
Explanation: Selecting specific cells and investigating them
End of explanation
# get voltage curves
df_cycles1 = data1.get_cap(
method="back-and-forth",
categorical_column=True,
label_cycle_number=True,
interpolated=False,
)
# get interpolated voltage curves
df_cycles2 = data1.get_cap(
method="back-and-forth",
categorical_column=True,
label_cycle_number=True,
interpolated=True,
dx=0.1,
number_of_points=100,
)
%%opts Scatter [width=600] (color="red", alpha=0.9, size=12)
single_curve = hv.Curve(df_cycles1, kdims=["capacity", "cycle"], vdims="voltage", label="not-smoothed").groupby("cycle")
single_scatter = hv.Scatter(df_cycles2, kdims=["capacity", "cycle"], vdims="voltage", label="smoothed").groupby("cycle")
single_scatter * single_curve
Explanation: Let's see how the smoothing (interpolation) method works
End of explanation
import hvplot.pandas
# hvplot does not like infinities
s = b.summaries.replace([np.inf, -np.inf], np.nan)
layout = (
s["coulombic_efficiency"].hvplot()
+ s["discharge_capacity"].hvplot() * s["charge_capacity"].hvplot()
)
layout.cols(1).opts()
s["cumulated_coulombic_efficiency"].hvplot()
Explanation: Using hvplot for plotting summaries
You can for example use hvplot for looking more at your summary data
End of explanation
from cellpy.utils.batch_tools.batch_analyzers import OCVRelaxationAnalyzer
print(" analyzing ocv relaxation data ".center(80, "-"))
analyzer = OCVRelaxationAnalyzer()
analyzer.assign(b.experiment)
analyzer.direction = "up"
analyzer.do()
dfs = analyzer.last
df_file_one = dfs[0]
df_file_one
# keeping only the columns with voltages (i.e. skipping "step", etc.)
ycols = [col for col in df_file_one.columns if col.find("point") >= 0]
# removing the first ocv rlx (relaxation before starting cycling)
df = df_file_one.iloc[1:, :]
df.head()
# tidy format
df = df.melt(id_vars="cycle", var_name="point", value_vars=ycols, value_name="voltage")
df
curve = (
hv.Curve(df, kdims=["cycle", "point"], vdims="voltage")
.groupby("point")
.overlay()
.opts(xlim=(1, 100), width=800)
)
scatter = (
hv.Scatter(df, kdims=["cycle", "point"], vdims="voltage")
.groupby("point")
.overlay()
.opts(
# xlim=(1,10), ylim=(0.7,1)
)
)
layout = hv.Layout(curve * scatter)
layout.cols(1)
Explanation: Looking more in-depth and utilising advanced features
OCV relaxation points
Picking out 5 points on each OCV relaxation curve (distributed by last, last/2, last/2/2, ..., first).
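The halving pattern itself is easy to sketch; the helper below is only an illustration of the "last, last/2, last/4, ..., first" selection described here (the actual picking is done inside OCVRelaxationAnalyzer and may differ in detail):
def halving_indices(n_points, n_selected=5):
    # indices counted from the end of the relaxation step: last, last/2, last/4, ..., first
    idx, current = [], n_points - 1
    for _ in range(n_selected - 1):
        idx.append(current)
        current //= 2
    idx.append(0)
    return sorted(set(idx))
# e.g. halving_indices(200) -> [0, 24, 49, 99, 199]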
End of explanation
b.summary_columns
discharge_capacity = b.summaries.discharge_capacity
charge_capacity = b.summaries.charge_capacity
coulombic_efficiency = b.summaries.coulombic_efficiency
ir_charge = b.summaries.ir_charge
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot(discharge_capacity)
ax1.set_ylabel("capacity ")
ax2.plot(ir_charge)
ax2.set_xlabel("cycle")
ax2.set_ylabel("resistance")
Explanation: Looking closer at some summary-plots
End of explanation
# Lets check what cells we have
cell_labels = b.experiment.cell_names
cell_labels
# OK, then I choose one of them
data = b.experiment.data["20190204_FC_snx012_01_cc_01"]
Explanation: 5. Checking for more details per cycle
A. pick the CellpyData object for one of the cells
End of explanation
cap = data.get_cap(categorical_column=True)
cap.head()
fig, ax = plt.subplots()
ax.plot(cap.capacity, cap.voltage)
ax.set_xlabel("capacity")
ax.set_ylabel("voltage")
cv = data.get_cap(method="forth")
fig, ax = plt.subplots()
ax.set_xlabel("capacity")
ax.set_ylabel("voltage")
ax.plot(cv.capacity, cv.voltage)
c4 = data.get_cap(cycle=4, method="forth-and-forth")
c10 = data.get_cap(cycle=10, method="forth-and-forth")
fig, ax = plt.subplots()
ax.set_xlabel("capacity")
ax.set_ylabel("voltage")
ax.plot(c4.capacity, c4.voltage, "ro", label="cycle 4")
ax.plot(c10.capacity, c10.voltage, "bs", label="cycle 10")
ax.legend();
Explanation: B. Get some voltage curves for some cycles and plot them
The method get_cap can be used to extract voltage curves.
End of explanation
from cellpy.utils import ica
v4, dqdv4 = ica.dqdv_cycle(
data.get_cap(4, categorical_column=True, method="forth-and-forth")
)
v10, dqdv10 = ica.dqdv_cycle(
data.get_cap(10, categorical_column=True, method="forth-and-forth")
)
plt.plot(v4, dqdv4, label="cycle 4")
plt.plot(v10, dqdv10, label="cycle 10")
plt.legend();
Explanation: Looking at some dqdv data
Get capacity cycles and make dqdv using the ica module
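For context, dqdv is the incremental capacity, i.e. the derivative $\mathrm{d}Q/\mathrm{d}V$ of the capacity curve with respect to voltage; peaks in it correspond to the plateaus seen in the voltage curves plotted above.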
End of explanation
fig, ax = plt.subplots()
for cycle in data.get_cycle_numbers():
d = data.get_cap(cycle, categorical_column=True, method="forth-and-forth")
if not d.empty:
v, dqdv = ica.dqdv_cycle(d)
ax.plot(v, dqdv)
else:
print(f"cycle {cycle} seems to be missing or corrupted")
Explanation: Put it in a for-loop for plotting many ica plots
End of explanation
hv.extension("bokeh")
tidy_ica = ica.dqdv_frames(data)
cycles = list(range(2, 10)) + [20, 50, 100, 300]
tidy_ica = tidy_ica.loc[tidy_ica.cycle.isin(cycles), :]
%%opts Curve [xlim=(2.8,4.4), ylim=(-30000, 30000)] (color=hv.Palette('Magma'), alpha=0.9) NdOverlay [legend_position='right', width=800, height=500]
curve4 = (hv.Curve(tidy_ica, kdims=['voltage'], vdims=['dq', 'cycle'], label="Incremental capacity plot")
.groupby("cycle")
.overlay()
)
curve4
Explanation: Get all the dqdv data in one go
End of explanation |
14,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 07b
Step1: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function
Step2: Exploratory data analysis
According to its documentation, the Auto MPG dataset consists of eight explanatory variables (i.e. features), each describing a single car model, which are related to the given target variable
Step3: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself so let's drop it as a feature and use it as the index instead
Step4: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set
Step5: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the value type of the horsepower column isn't numeric
Step6: We can correct this by converting the column values to numbers manually, using pandas' to_numeric function
Step7: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
According to the documentation, the origin variable is categoric (i.e. origin = 1 is not "less than" origin = 2) and so we should encode it via one hot encoding so that our model can make sense of it. This is easy with pandas
Step8: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a decision tree regression model, we don't need to worry about the effects of multicollinearity, and so there's no need to drop one of the encoded variable columns as we did in the case of linear regression.
Next, let's take a look at the distribution of the variables in the data frame. We can start by computing some descriptive statistics
Step9: Print a matrix of pairwise Pearson correlation values
Step10: Let's also create a scatter plot matrix
Step11: Based on the above information, we can conclude the following
Step12: You can find a more detailed description of each parameter in the scikit-learn documentation.
Let's use a grid search to select the optimal decision tree regression model from a set of candidates. First, we define the parameter grid. Then, we can use a grid search to select the best model via an inner cross validation and an outer cross validation to measure the accuracy of the selected model.
Step13: Our decision tree regression model predicts the MPG with an average error of approximately ±2.32 with a standard deviation of 3.16, which is similar to our final linear regression model from Lab 06. It's also worth noting that we were able to achieve this level of accuracy with very little feature engineering effort. This is because decision tree regression does not rely on the same set of assumptions (e.g. linearity) as linear regression, and so is able to learn from data with less manual tuning.
We can check the parameters that led to the best model via the best_params_ attribute of the output of our grid search, as follows
Step14: Random forest regression
Next, let's build a random forest regression model to predict the car MPGs to see if we can improve on our decision tree model. Random forests are ensemble models, i.e. they are a collection of different decision trees, each of which is trained on a random subset of the data. By combining trees with different characteristics, it's possible to form an overall model that can utilise the benefits of each, which often produces better results than using a single tree to model all the data. scikit-learn supports ensemble model functionality via the ensemble subpackage. This subpackage supports both random forest regression and classification. We can use the RandomForestRegressor class to build our model.
RandomForestRegressor accepts a number of different hyperparameters and the model we build may be more or less accurate depending on their values. We can get a list of these modelling parameters using the get_params method of the estimator (this works on any scikit-learn estimator), like this
Step15: As before, you can find a more detailed description of each parameter in the scikit-learn documentation.
Let's use a grid search to select the optimal random forest regression model from a set of candidates. First, we define the parameter grid. Then, we can use a grid search to select the best model via an inner cross validation and an outer cross validation to measure the accuracy of the selected model. | Python Code:
%matplotlib inline
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
Explanation: Lab 07b: Decision tree regression
Introduction
This lab focuses on data modelling using decision tree and random forest regression. It's a direct counterpart to the linear regression modelling in Lab 06. At the end of the lab, you should be able to use scikit-learn to:
Create a decision tree regression model and a random forest regression model.
Use the models to predict new values.
Measure the accuracy of the models.
Getting started
Let's start by importing the packages we'll need. As usual, we'll import pandas for exploratory analysis, but this week we're also going to use the tree subpackage from scikit-learn to create decision tree models and the ensemble subpackage to create random forest models.
End of explanation
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
df = pd.read_fwf(url, header=None, names=['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'model year', 'origin', 'car name'])
Explanation: Next, let's load the data. This week, we're going to load the Auto MPG data set, which is available online at the UC Irvine Machine Learning Repository. The dataset is in fixed width format, but fortunately this is supported out of the box by pandas' read_fwf function:
End of explanation
df.head()
Explanation: Exploratory data analysis
According to its documentation, the Auto MPG dataset consists of eight explanatory variables (i.e. features), each describing a single car model, which are related to the given target variable: the number of miles per gallon (MPG) of fuel of the given car. The following attribute information is given:
mpg: continuous
cylinders: multi-valued discrete
displacement: continuous
horsepower: continuous
weight: continuous
acceleration: continuous
model year: multi-valued discrete
origin: multi-valued discrete
car name: string (unique for each instance)
Let's start by taking a quick peek at the data:
End of explanation
df = df.set_index('car name')
df.head()
Explanation: As the car name is unique for each instance (according to the dataset documentation), it cannot be used to predict the MPG by itself so let's drop it as a feature and use it as the index instead:
Note: It seems plausible that MPG efficiency might vary from manufacturer to manufacturer, so we could generate a new feature by converting the car names into manufacturer names, but for simplicity lets just drop them here.
End of explanation
df = df[df['horsepower'] != '?']
Explanation: According to the documentation, the horsepower column contains a small number of missing values, each of which is denoted by the string '?'. Again, for simplicity, let's just drop these from the data set:
End of explanation
df.dtypes
Explanation: Usually, pandas is smart enough to recognise that a column is numeric and will convert it to the appropriate data type automatically. However, in this case, because there were strings present initially, the value type of the horsepower column isn't numeric:
End of explanation
df['horsepower'] = pd.to_numeric(df['horsepower'])
# Check the data types again
df.dtypes
Explanation: We can correct this by converting the column values to numbers manually, using pandas' to_numeric function:
End of explanation
df = pd.get_dummies(df, columns=['origin'])
df.head()
Explanation: As can be seen, the data type of the horsepower column is now float64, i.e. a 64 bit floating point value.
According to the documentation, the origin variable is categoric (i.e. origin = 1 is not "less than" origin = 2) and so we should encode it via one hot encoding so that our model can make sense of it. This is easy with pandas: all we need to do is use the get_dummies method, as follows:
End of explanation
df.describe()
Explanation: As can be seen, one hot encoding converts the origin column into separate binary columns, each representing the presence or absence of the given category. Because we're going to use a decision tree regression model, we don't need to worry about the effects of multicollinearity, and so there's no need to drop one of the encoded variable columns as we did in the case of linear regression.
Next, let's take a look at the distribution of the variables in the data frame. We can start by computing some descriptive statistics:
End of explanation
df.corr()
Explanation: Print a matrix of pairwise Pearson correlation values:
End of explanation
pd.plotting.scatter_matrix(df, s=50, hist_kwds={'bins': 10}, figsize=(16, 16));
Explanation: Let's also create a scatter plot matrix:
End of explanation
DecisionTreeRegressor().get_params()
Explanation: Based on the above information, we can conclude the following:
Based on a quick visual inspection, there don't appear to be significant numbers of outliers in the data set. (We could make boxplots for each variable - but let's save time and skip it here.)
Most of the explanatory variables appear to have a non-linear relationship with the target.
There is a high degree of correlation ($r > 0.9$) between cylinders and displacement and, also, between weight and displacement.
The following variables appear to be left-skewed: mpg, displacement, horsepower, weight.
The acceleration variable appears to be normally distributed.
The model year follows a roughly uniform distribution.
The cylinders and origin variables have few unique values.
For now, we'll just note this information, but we'll come back to it later when improving our model.
Data Modelling
Decision tree regression
Let's build a decision tree regression model to predict the MPG of a car based on its other attributes. scikit-learn supports decision tree functionality via the tree subpackage. This subpackage supports both decision tree regression and classification. We can use the DecisionTreeRegressor class to build our model.
DecisionTreeRegressor accepts a number of different hyperparameters and the model we build may be more or less accurate depending on their values. We can get a list of these modelling parameters using the get_params method of the estimator (this works on any scikit-learn estimator), like this:
End of explanation
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
algorithm = DecisionTreeRegressor(random_state=0)
# Build models for different values of min_samples_leaf and min_samples_split
parameters = {
'min_samples_leaf': [1, 10, 20],
'min_samples_split': [2, 10, 20] # Min value is 2
}
# Use inner CV to select the best model
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0) # K = 5
clf = GridSearchCV(algorithm, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
# Print the results
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the decision tree regression model',
xlabel='Error'
);
Explanation: You can find a more detailed description of each parameter in the scikit-learn documentation.
Let's use a grid search to select the optimal decision tree regression model from a set of candidates. First, we define the parameter grid. Then, we can use a grid search to select the best model via an inner cross validation and an outer cross validation to measure the accuracy of the selected model.
End of explanation
clf.best_params_
Explanation: Our decision tree regression model predicts the MPG with an average error of approximately ±2.32 with a standard deviation of 3.16, which is similar to our final linear regression model from Lab 06. It's also worth noting that we were able to achieve this level of accuracy with very little feature engineering effort. This is because decision tree regression does not rely on the same set of assumptions (e.g. linearity) as linear regression, and so is able to learn from data with less manual tuning.
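(Here "average error" refers to the mean absolute error reported above, $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i \rvert$, computed over the cross-validated predictions.)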
We can check the parameters that led to the best model via the best_params_ attribute of the output of our grid search, as follows:
End of explanation
RandomForestRegressor().get_params()
Explanation: Random forest regression
Next, let's build a random forest regression model to predict the car MPGs to see if we can improve on our decision tree model. Random forests are ensemble models, i.e. they are a collection of different decision trees, each of which is trained on a random subset of the data. By combining trees with different characteristics, it's possible to form an overall model that can utilise the benefits of each, which often produces better results than using a single tree to model all the data. scikit-learn supports ensemble model functionality via the ensemble subpackage. This subpackage supports both random forest regression and classification. We can use the RandomForestRegressor class to build our model.
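As a quick illustration of the averaging idea (a sketch only: rf and X are hypothetical names for a fitted RandomForestRegressor and its feature matrix, neither is defined in this lab):
import numpy as np
# every fitted tree is exposed via rf.estimators_; the forest's prediction is their mean
tree_predictions = np.column_stack([tree.predict(X) for tree in rf.estimators_])
assert np.allclose(tree_predictions.mean(axis=1), rf.predict(X))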
RandomForestRegressor accepts a number of different hyperparameters and the model we build may be more or less accurate depending on their values. We can get a list of these modelling parameters using the get_params method of the estimator (this works on any scikit-learn estimator), like this:
End of explanation
X = df.drop('mpg', axis='columns') # X = features
y = df['mpg'] # y = prediction target
algorithm = RandomForestRegressor(random_state=0)
# Build models for different values of n_estimators, min_samples_leaf and min_samples_split
parameters = {
'n_estimators': [2, 5, 10],
'min_samples_leaf': [1, 10, 20],
'min_samples_split': [2, 10, 20] # Min value is 2
}
# Use inner CV to select the best model
inner_cv = KFold(n_splits=5, shuffle=True, random_state=0) # K = 5
clf = GridSearchCV(algorithm, parameters, cv=inner_cv, n_jobs=-1) # n_jobs=-1 uses all available CPUs = faster
clf.fit(X, y)
# Use outer CV to evaluate the error of the best model
outer_cv = KFold(n_splits=10, shuffle=True, random_state=0) # K = 10, doesn't have to be the same
y_pred = cross_val_predict(clf, X, y, cv=outer_cv)
# Print the results
print('Mean absolute error: %f' % mean_absolute_error(y, y_pred))
print('Standard deviation of the error: %f' % (y - y_pred).std())
ax = (y - y_pred).hist()
ax.set(
title='Distribution of errors for the random forest regression model',
xlabel='Error'
);
Explanation: As before, you can find a more detailed description of each parameter in the scikit-learn documentation.
Let's use a grid search to select the optimal random forest regression model from a set of candidates. First, we define the parameter grid. Then, we can use a grid search to select the best model via an inner cross validation and an outer cross validation to measure the accuracy of the selected model.
End of explanation |
14,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Story
The data (Bondora's loan book) can be downloaded from
Step1: Number of loans per year
Step2: From the initial analysis we can see that the number of loans is definitely growing over time. This can be caused by a higher demand for loans or a rise in Bondora's popularity.
Median salary per year and country
Step3: We can see that, generally, the income of the borrowers increases over time. This is expected behaviour, as the countries where Bondora operates have seen an increase in average salary over the last years.
Loan amount analysis
Step4: List of the top 30 loan amounts (rounded down to the nearest 100) with counts
Step5: The most common loan amount is 500 EUR with the first 13 being lower or equal to 3100 EUR.
Distributions of loan amounts over years
Step6: In the first couple of years the loans were much lower than in the later years. Average, minimum and maximum loan amounts increase over time.
Distribution of loan amounts per country
Step7: Finland has the highest most frequent loan amount (about 2100 EUR) and Estonia the lowest (about 500 EUR). The shapes of the distributions are similar across all the countries.
Loan duration analysis
Step8: Loan duration with relation to the amount
Step9: There is a visible linear dependency between the amount borrowed and loan duration -- the longer the loan, the higher the amount borrowed.
Loan duration with relation to year of issue
Step10: Over the first three years Bondora issued loans of maximum 24 months duration, but since 2013 the maximum duration is 60 months. We can see that the most popular durations in the recent years are 36 and 60 months with very few borrowers choosing durations lower than 12 months.
Step11: Number of dependants vs age
Step12: More than half of the borrowers have no dependants at all, with very few borrowers having more than 5 dependants.
Step13: We can see a non-linear dependency between the age of the borrower and the number of dependants, gradually increasing from the age of 18, reaching a peak between 40 and 45, and then gradually decreasing.
Number of loans listed per year month
Step14: From the analysis of the loans listed per yearmonth, it is clearly visible that Slovakian loans were listed only for a short period of time (mostly 2014) and since then borrowing from that country has been phased out.
Distribution of loan amounts for genders
Step15: H0 | Python Code:
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
import warnings
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
pd.options.display.max_rows = 125
import seaborn as sns
sns.set(color_codes=True)
sns.set(rc={"figure.figsize": (16, 4)})
loandata = pd.read_csv("data/loandata.csv", low_memory=False)
loandata['year'] = pd.to_datetime(loandata['ListedOnUTC']).dt.year
loandata['yearmonth'] = pd.to_datetime(loandata['ListedOnUTC']).dt.to_period('M')
recentld = loandata[loandata['year'] > 2012]
repaid = recentld[recentld['Status'] == 'Repaid']
(loandata.shape, recentld.shape)
Explanation: Data Story
The data (Bondora's loan book) can be downloaded from: https://www.bondora.com/marketing/media/LoanData.zip
End of explanation
countByYear = loandata.groupby('year').size()
plot = sns.barplot(x=countByYear.index,y=countByYear)
Explanation: Number of loans per year
End of explanation
t = loandata[['year', 'IncomeTotal', 'Country']]
t = t[(t['year'] > 2010) & (t['year'] < 2017)]
plot = t.groupby(['year', 'Country']).median().unstack(1).plot(kind='bar', figsize=(16, 4))
Explanation: From the initial analysis we can see that the number of loans is definitely growing over time. This can be caused by a higher demand for loans or rise in popularity of Bondora.
Median salary per year and country
End of explanation
plot = sns.distplot(loandata['Amount'].astype(int), bins=50)
Explanation: We can see that, generally, the income of the borrowers increases over time. This is expected, as the countries where Bondora operates have seen average salaries rise over recent years.
Loan amount analysis
End of explanation
plot = (loandata['Amount'] // 100 * 100).value_counts().head(30).plot(kind='bar')
Explanation: List of the top 30 loan amount (round to nearest 100) with counts:
End of explanation
plot = sns.violinplot(cut=0, scale="width", x="year", y="Amount", data=loandata[['Amount', 'year']])
Explanation: The most common loan amount is 500 EUR with the first 13 being lower or equal to 3100 EUR.
Distributions of loan amounts over years
End of explanation
plot = sns.violinplot(cut=0, scale="width", x="Country", y="Amount", data=loandata[['Amount', 'Country']])
Explanation: In the first couple of years the loans were much lower than in the later years. The average, minimum and maximum loan amounts all increase over time.
Distribution of loan amounts per country
End of explanation
pd.options.mode.chained_assignment = None # default='warn'
t = loandata[['Amount', 'LoanDuration']]
t['LoanDuration2'] = t['LoanDuration'] // 12 * 12
plot = sns.distplot(loandata['LoanDuration'], bins=50) # remove density
Explanation: Finland has the highest most frequent loan amount (about 2100 EUR) and Estonia the lowest (about 500 EUR). The shapes of the distributions are similar across all the countries.
Loan duration analysis
End of explanation
plot = sns.violinplot(cut=0, scale="width", x="LoanDuration2", y="Amount", data=t)
Explanation: Loan duration with relation to the amount
End of explanation
plot = sns.violinplot(cut=0, scale="width", x="year", y="LoanDuration", data=loandata[['year', 'LoanDuration']])
Explanation: There is a visible linear dependency between the amount borrowed and loan duration -- the longer the loan, the higher the amount borrowed.
Loan duration with relation to year of issue
End of explanation
plot = sns.violinplot(cut=0, scale="width", x="year", y="LoanDuration", data=repaid[['year', 'LoanDuration']])
Explanation: Over the first three years Bondora issued loans of maximum 24 months duration, but since 2013 the maximum duration is 60 months. We can see that the most popular durations in the recent years are 36 and 60 months with very few borrowers choosing durations lower than 12 months.
End of explanation
p = loandata[['Age', 'NrOfDependants']]
p['DepNum'] = pd.to_numeric(loandata.NrOfDependants, errors='coerce')
plot = p.groupby('NrOfDependants').size().sort_values().plot(kind='bar')
Explanation: Number of dependants vs age
End of explanation
p = p.dropna().astype(int)
grid = sns.lmplot(x="Age", y="NrOfDependants", data=p, fit_reg=False, size=6, aspect=3)
Explanation: More than half of the borrowers have no dependants at all, with very few borrowers having more than 5 dependants.
End of explanation
loandata['yearmonth'] = pd.to_datetime(loandata['ListedOnUTC']).dt.to_period('M')
plot = loandata.groupby(['yearmonth', 'Country']).size().unstack(1).sort_index(ascending=True).fillna(0).plot(figsize=(16, 5))
Explanation: We can see a non-linear dependency between the age of the borrower and the number of dependants, gradually increasing from the age of 18, reaching a peak between 40 and 45, and then gradually decreasing.
Number of loans listed per year month
End of explanation
plot = sns.violinplot(cut=0, scale="width", x="Amount", y="Gender", orient='h', data=recentld)
df = recentld
m = df[df['Gender'] == 0.0]
f = df[df['Gender'] == 1.0]
(m.shape, f.shape)
v = 'Amount'
m_mean = m[v].dropna().mean()
f_mean = f[v].dropna().mean()
std = df[v].dropna().std()
z = (m_mean - f_mean) / std
(m_mean, f_mean, std, m_mean - f_mean, z)
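As an illustrative aside (not part of the original notebook), the same null hypothesis could also be checked with Welch's t-test from scipy, using the m and f gender subsets defined above:
from scipy import stats
t_stat, p_value = stats.ttest_ind(m[v].dropna(), f[v].dropna(), equal_var=False)  # Welch's two-sample t-test
(t_stat, p_value)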
Explanation: From the analysis of the loans listed per yearmonth, it is clearly visible that Slovakian loans were listed only for a short period of time (mostly 2014) and since then borrowing from that country has been phased out.
Distribution of loan amounts for genders
End of explanation
repaid['Defaulted'] = repaid['PrincipalPaymentsMade'] < repaid['Amount']
repaid[['Defaulted', 'PrincipalPaymentsMade', 'InterestAndPenaltyPaymentsMade', 'Amount', 'Interest']]
print(repaid.shape)
print(repaid['Defaulted'].mean())
print(repaid['PrincipalPaymentsMade'].sum() / repaid['Amount'].sum())
print(repaid['InterestAndPenaltyPaymentsMade'].sum() / repaid['Amount'].sum())
print((repaid['PrincipalPaymentsMade'].sum() + repaid['InterestAndPenaltyPaymentsMade'].sum()) / repaid['Amount'].sum())
Explanation: H0: No difference between mean loan amount for female borrowers and male borrowers. H1 - there is a difference.
Historical repayment rate of principal and amount of interest with penalties
End of explanation |
14,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KFServing Pipeline samples
This notebook assumes your cluster has KFServing >= v0.5.0 installed which supports the v1beta1 API.
Install the necessary kfp library
Step1: TensorFlow example
Step2: Custom model example | Python Code:
!pip3 install kfp --upgrade
import kfp.compiler as compiler
import kfp.dsl as dsl
import kfp
from kfp import components
# Create kfp client
# Note: Add the KubeFlow Pipeline endpoint below if the client is not running on the same cluster.
# Example: kfp.Client('http://192.168.1.27:31380/pipeline')
client = kfp.Client()
EXPERIMENT_NAME = 'KFServing Experiments'
experiment = client.create_experiment(name=EXPERIMENT_NAME, namespace='anonymous')
Explanation: KFServing Pipeline samples
This notebook assumes your cluster has KFServing >= v0.5.0 installed which supports the v1beta1 API.
Install the necessary kfp library
End of explanation
kfserving_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/kfserving/component.yaml')
@dsl.pipeline(
name='KFServing pipeline',
description='A pipeline for KFServing.'
)
def kfservingPipeline(
action='apply',
model_name='tensorflow-sample',
model_uri='gs://kfserving-samples/models/tensorflow/flowers',
namespace='anonymous',
framework='tensorflow'):
kfserving = kfserving_op(action = action,
model_name=model_name,
model_uri=model_uri,
namespace=namespace,
framework=framework).set_image_pull_policy('Always')
# Compile pipeline
compiler.Compiler().compile(kfservingPipeline, 'tf-flower.tar.gz')
# Execute pipeline
run = client.run_pipeline(experiment.id, 'tf-flower', 'tf-flower.tar.gz')
Explanation: TensorFlow example
End of explanation
kfserving_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/kfserving/component.yaml')
@dsl.pipeline(
name='KFServing pipeline',
description='A pipeline for KFServing.'
)
def kfservingPipeline(
action='apply',
model_name='max-image-segmenter',
namespace='anonymous',
custom_model_spec='{"name": "image-segmenter", "image": "codait/max-image-segmenter:latest", "port": "5000"}'
):
kfserving = kfserving_op(action=action,
model_name=model_name,
namespace=namespace,
custom_model_spec=custom_model_spec).set_image_pull_policy('Always')
# Compile pipeline
compiler.Compiler().compile(kfservingPipeline, 'custom.tar.gz')
# Execute pipeline
run = client.run_pipeline(experiment.id, 'custom-model', 'custom.tar.gz')
Explanation: Custom model example
End of explanation |
14,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embedding CPLEX in a ML Spark Pipeline
Spark ML provides a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.
In this notebook, we show how to embed CPLEX as a Spark transformer class.
DOcplex provides transformer classes that take a matrix X of constraints and a vector y of costs and solves a linear problem using CPLEX.
Transformer classes share a solve(X, Y, **params) method which expects
Step1: In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods in elementary nutrients, plus limitations on quantities for foods and nutrients, and food costs, the goal is to find the optimal quantity for each food for a balanced diet.
The FOOD_NUTRIENTS data intentionally contains a missing value ($np.nan$) to illustrate the use of a pipeline involving a data cleansing stage.
Step2: Creating a Spark session
Step3: Using the transformer with a Spark dataframe
In this section we show how to use a transformer with data stored in a Spark dataframe.
Prepare the data as a numpy matrix
In this section we build a numpy matrix to be passed to the transformer.
First, we extract the food to nutrient matrix by stripping the names.
Step4: Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the FOODS collection of tuples into columns
Step5: We are now ready to prepare the transformer matrix. This matrix has shape (7, 11) as we
have 7 nutrients and 9 foods, plus the additional min and max columns
Step6: Populate a Spark dataframe with the matrix data
In this section we build a Spark dataframe matrix to be passed to the transformer.
Using a Spark dataframe will also allow us to chain multiple transformers in a pipeline.
Step7: Let's display the dataframe schema and content
Step8: Solving the Diet problem with the $CplexRangeTransformer$ in a Pipeline
To use the transformer, create an instance and pass the following parameters to the transform method
- the X matrix of size(M, N+2) containing coefficients for N column variables plus two additional columns for range mins and maxs.
- the Y cost vector (using "y" parameter id)
- whether one wants to solve a minimization (min) or maximization (max) problem (using "sense" parameter id)
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments
Step9: Example with CplexTransformer
To illustrate the usage of the $CplexTransformer$, let's remove the constraint on the minimum amount for nutrients, and reformulate the problem as a cost maximization.
First, let's define a new dataframe for the constraints matrix by removing the min column from the food_nutrients_df dataframe so that it is a well-formed input matrix for the $CplexTransformer$ | Python Code:
try:
import numpy as np
except ImportError:
raise RuntimeError('This notebook requires numpy')
Explanation: Embedding CPLEX in a ML Spark Pipeline
Spark ML provides a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.
In this notebook, we show how to embed CPLEX as a Spark transformer class.
DOcplex provides transformer classes that take a matrix X of constraints and a vector y of costs and solves a linear problem using CPLEX.
Transformer classes share a solve(X, Y, **params) method which expects:
- an X matrix containing the constraints of the linear problem
- a Y vector containing the cost coefficients.
The transformer classes requires a Spark DataFrame for the 'X' matrix, and support various formats for the 'Y' vector:
Python lists,
numpy vector,
pandas Series,
Spark columns
The same formats are also supported to optionally specify upper bounds for decision variables.
DOcplex transformer classes
There are two DOcplex transformer classes:
$CplexTransformer$ expects to solve a linear problem in the classical form:
$$ minimize\ C^{t} x\ s.t.\
Ax <= B$$
Where $A$ is a (M,N) matrix describing the constraints and $B$ is a scalar vector of size M, containing the right hand sides of the constraints, and $C$ is the cost vector of size N. In this case the transformer expects a (M,N+1) matrix, where the last column contains the right hand sides.
$CplexRangeTransformer$ expects to solve linear problem as a set of range constraints:
$$ minimize\ C^{t} x\ s.t.\
m <= Ax <= M$$
Where $A$ is a (M,N) matrix describing the constraints, $m$ and $M$ are two scalar vectors of size M, containing the minimum and maximum values for the row expressions, and $C$ is the cost vector of size N. In this case the transformer expects a (M,N+2) matrix, where the last two columns contains the minimum and maximum values (in this order).
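As a purely illustrative sketch (not from the original notebook), the (M, N+2) input expected by the range transformer is just the N coefficient columns followed by the per-row min and max columns:
import numpy as np
coeffs = np.array([[1.0, 2.0],   # coefficients of a first constraint on two variables
                   [3.0, 0.5]])  # coefficients of a second constraint
row_min = [0.0, 1.0]
row_max = [10.0, 5.0]
X_range = np.hstack([coeffs, np.array([row_min, row_max]).T])  # shape (2, 4) == (M, N+2)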
End of explanation
# the baseline diet data as Python lists of tuples.
FOODS = [
("Roasted Chicken", 0.84, 0, 10),
("Spaghetti W/ Sauce", 0.78, 0, 10),
("Tomato,Red,Ripe,Raw", 0.27, 0, 10),
("Apple,Raw,W/Skin", .24, 0, 10),
("Grapes", 0.32, 0, 10),
("Chocolate Chip Cookies", 0.03, 0, 10),
("Lowfat Milk", 0.23, 0, 10),
("Raisin Brn", 0.34, 0, 10),
("Hotdog", 0.31, 0, 10)
]
NUTRIENTS = [
("Calories", 2000, 2500),
("Calcium", 800, 1600),
("Iron", 10, 30),
("Vit_A", 5000, 50000),
("Dietary_Fiber", 25, 100),
("Carbohydrates", 0, 300),
("Protein", 50, 100)
]
FOOD_NUTRIENTS = [
("Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0.0, 0.0, 42.2),
("Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2),
("Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1.0),
("Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21.0, 0.3),
("Grapes", 15.1, 3.4, 0.1, 24.0, 0.2, 4.1, 0.2),
("Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0.0, 9.3, 0.9),
("Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0.0, 11.7, 8.1),
("Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4.0, 27.9, 4.0),
("Hotdog", 242.1, 23.5, 2.3, 0.0, 0.0, 18.0, 10.4)
]
nb_foods = len(FOODS)
nb_nutrients = len(NUTRIENTS)
print('#foods={0}'.format(nb_foods))
print('#nutrients={0}'.format(nb_nutrients))
assert nb_foods == len(FOOD_NUTRIENTS)
Explanation: In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods in elementary nutrients, plus limitations on quantities for foods an nutrients, and food costs, the goal is to find the optimal quantity for each food for a balanced diet.
The FOOD_NUTRIENTS data intentionally contains a missing value ($np.nan$) to illustrate the use of a pipeline involving a data cleansing stage.
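A minimal sketch of what such a cleansing stage could look like with pandas (an illustrative assumption; the notebook below simply builds the matrix with numpy):
import pandas as pd
fn_df = pd.DataFrame(FOOD_NUTRIENTS, columns=["name"] + [n[0] for n in NUTRIENTS])
fn_df = fn_df.fillna(fn_df.mean(numeric_only=True))  # replace any np.nan by the column mean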
End of explanation
try:
import findspark
findspark.init()
except ImportError:
# Ignore exception: the 'findspark' module is required when executing Spark in a Windows environment
pass
import pyspark # Only run after findspark.init() (if running in a Windows environment)
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
Explanation: Creating a Spark session
End of explanation
mat_fn = np.matrix([FOOD_NUTRIENTS[f][1:] for f in range(nb_foods)])
print('The food-nutrient matrix has shape: {0}'.format(mat_fn.shape))
Explanation: Using the transformer with a Spark dataframe
In this section we show how to use a transformer with data stored in a Spark dataframe.
Prepare the data as a numpy matrix
In this section we build a numpy matrix to be passed to the transformer.
First, we extract the food to nutrient matrix by stripping the names.
End of explanation
nutrient_mins = [NUTRIENTS[n][1] for n in range(nb_nutrients)]
nutrient_maxs = [NUTRIENTS[n][2] for n in range(nb_nutrients)]
food_names ,food_costs, food_mins, food_maxs = map(list, zip(*FOODS))
Explanation: Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the FOODS collection of tuples into columns
End of explanation
# step 1. add two lines for nutrient mins, maxs
nf2 = np.append(mat_fn, np.matrix([nutrient_mins, nutrient_maxs]), axis=0)
mat_nf = nf2.transpose()
mat_nf.shape
Explanation: We are now ready to prepare the transformer matrix. This matrix has shape (7, 11) as we
have 7 nutrients and 9 foods, plus the additional min and max columns
End of explanation
from pyspark.sql import SQLContext
sc = spark.sparkContext
sqlContext = SQLContext(sc)
columns = food_names + ['min', 'max']
food_nutrients_df = sqlContext.createDataFrame(mat_nf.tolist(), columns)
Explanation: Populate a Spark dataframe with the matrix data
In this section we build a Spark dataframe matrix to be passed to the transformer.
Using a Spark dataframe will also allow us to chain multiple transformers in a pipeline.
End of explanation
food_nutrients_df.printSchema()
food_nutrients_df.show()
Explanation: Let's display the dataframe schema and content
End of explanation
from docplex.mp.sparktrans.transformers import CplexRangeTransformer
from pyspark.ml import Pipeline
from pyspark.sql.functions import *
# Create the optimization transformer to calculate the optimal quantity for each food for a balanced diet.
cplexSolve = CplexRangeTransformer(minCol='min', maxCol='max', ubs=food_maxs)
# Make evaluation on input data. Additional parameters are specified using the 'params' dictionary
diet_df = cplexSolve.transform(food_nutrients_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'min'})
diet_df.orderBy(desc("value")).show()
Explanation: Solving the Diet problem with the $CplexRangeTransformer$ in a Pipeline
To use the transformer, create an instance and pass the following parameters to the transform method
- the X matrix of size(M, N+2) containing coefficients for N column variables plus two addition column for range mins and maxs.
- the Y cost vector (using "y" parameter id)
- whether one wants to solve a minimization (min) or maximization (max) problem (using "sense" parameter id)
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments:
ubs denotes the upper bound for the column variables that are created. The expected size of this scalar vector is N (when matrix has size (M,N+2))
minCol and maxCol are the names of the columns corresponding to the constraints min and max range in the X matrix
End of explanation
food_nutrients_LP_df = food_nutrients_df.select([item for item in food_nutrients_df.columns if item not in ['min']])
food_nutrients_LP_df.show()
from docplex.mp.sparktrans.transformers import CplexTransformer
# Create the optimization transformer to calculate the optimal quantity for each food for a balanced diet.
# Here, let's use the CplexTransformer by specifying only a maximum amount for each nutrient.
cplexSolve = CplexTransformer(rhsCol='max', ubs=food_maxs)
# Make evaluation on input data. Additional parameters are specified using the 'params' dictionary
# Since there is no lower range for decision variables, let's maximize cost instead! (otherwise, the result is all 0's)
diet_max_cost_df = cplexSolve.transform(food_nutrients_LP_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'max'})
diet_max_cost_df.orderBy(desc("value")).show()
%matplotlib inline
import matplotlib.pyplot as plt
def plot_radar_chart(labels, stats, **kwargs):
angles=np.linspace(0, 2*np.pi, len(labels), endpoint=False)
# close the plot
stats = np.concatenate((stats, [stats[0]]))
angles = np.concatenate((angles, [angles[0]]))
fig = plt.figure()
ax = fig.add_subplot(111, polar=True)
ax.plot(angles, stats, 'o-', linewidth=2, **kwargs)
ax.fill(angles, stats, alpha=0.30, **kwargs)
ax.set_thetagrids(angles * 180/np.pi, labels)
#ax.set_title([df.loc[386,"Name"]])
ax.grid(True)
diet = diet_df.toPandas()
plot_radar_chart(labels=diet['name'], stats=diet['value'], color='r')
diet_max_cost = diet_max_cost_df.toPandas()
plot_radar_chart(labels=diet_max_cost['name'], stats=diet_max_cost['value'], color='r')
Explanation: Example with CplexTransformer
To illustrate the usage of the $CplexTransformer$, let's remove the constraint on the minimum amount for nutrients, and reformulate the problem as a cost maximization.
First, let's define a new dataframe for the constraints matrix by removing the min column from the food_nutrients_df dataframe so that it is a well-formed input matrix for the $CplexTransformer$:
End of explanation |
14,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 08 - Non linear Parabolic problem
Keywords
Step1: 3. Affine Decomposition
We set the variables $u
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element space (Lagrange P1)
Step4: 4.3. Allocate an object of the FitzHughNagumo class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from dolfin import *
from rbnics import *
from utils import *
Explanation: Tutorial 08 - Non linear Parabolic problem
Keywords: exact parametrized functions, POD-Galerkin
1. Introduction
In this tutorial, we consider the FitzHugh-Nagumo (F-N) system. The F-N system is used to describe neuron excitable systems. The nonlinear parabolic problem for the F-N system is defined on the interval $I=[0,L]$. Let $x\in I$, $t\geq0$
$$\begin{cases}
\varepsilon u_t(x,t) =\varepsilon^2u_{xx}(x,t)+g(u(x,t))-\omega(x,t)+c, & x\in I,\quad t\geq 0, \\
\omega_t(x,t) =bu(x,t)-\gamma\omega(x,t)+c, & x\in I,\quad t\geq 0, \\
u(x,0) = 0,\quad\omega(x,0)=0, & x\in I, \\
u_x(0,t)=-i_0(t),\quad u_x(L,t)=0, & t\geq 0,
\end{cases}$$
where the nonlinear function is defined by
$$g(u) = u(u-0.1)(1-u)$$
and the parameters are given by $L = 1$, $\varepsilon = 0.015$, $b = 0.5$, $\gamma = 2$, and $c = 0.05$. The stimulus $i_0(t)=50000t^3\exp(-15t)$. The variables $u$ and $\omega$ represent the $\textit{voltage}$ and the $\textit{recovery of voltage}$, respectively.
In order to obtain an exact solution of the problem we pursue a model reduction by means of a POD-Galerkin reduced order method.
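For intuition, the stimulus pulse can be visualized with a quick sketch (an illustrative aside, not part of the original tutorial):
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0.0, 8.0, 400)
plt.plot(t, 50000 * t**3 * np.exp(-15 * t))  # i_0(t) rises sharply and decays
plt.xlabel("t")
plt.ylabel("i_0(t)")
plt.show()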
2. Formulation for the F-N system
Let $u,\omega$ the solutions in the domain $I$.
For this problem we want to find $\boldsymbol{u}=(u,\omega)$ such that
$$
m\left(\partial_t\boldsymbol{u}(t),\boldsymbol{v}\right)+a\left(\boldsymbol{u}(t),\boldsymbol{v}\right)+c\left(u(t),v\right)=f(\boldsymbol{v})\quad \forall \boldsymbol{v}=(v,\tilde{v}), \text{ with }v,\tilde{v} \in\mathbb{V},\quad\forall t\geq0
$$
where
the function space $\mathbb{V}$ is defined as
$$
\mathbb{V} = {v\in L^2(I) : v|_{{0}}=0}
$$
the bilinear form $m(\cdot, \cdot): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$m(\partial\boldsymbol{u}(t), \boldsymbol{v})=\varepsilon\int_{I}\frac{\partial u}{\partial t}v \ d\boldsymbol{x} \ + \ \int_{I}\frac{\partial\omega}{\partial t}\tilde{v} \ d\boldsymbol{x},$$
the bilinear form $a(\cdot, \cdot): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$a(\boldsymbol{u}(t), \boldsymbol{v})=\varepsilon^2\int_{I} \nabla u\cdot \nabla v \ d\boldsymbol{x}+\int_{I}\omega v \ d\boldsymbol{x} \ - \ b\int_{I} u\tilde{v} \ d\boldsymbol{x}+\gamma\int_{I}\omega\tilde{v} \ d\boldsymbol{x},$$
the bilinear form $c(\cdot, \cdot): \mathbb{V} \times \mathbb{V} \to \mathbb{R}$ is defined by
$$c(u, v)=-\int_{I} g(u)v \ d\boldsymbol{x},$$
the linear form $f(\cdot): \mathbb{V} \to \mathbb{R}$ is defined by
$$f(\boldsymbol{v})= c\int_{I}\left(v+\tilde{v}\right) \ d\boldsymbol{x} \ + \ \varepsilon^2i_0(t)\int_{{0}}v \ d\boldsymbol{s}.$$
The output of interest $s(t)$ is given by
$$s(t) = c\int_{I}\left[u(t)+\omega(t)\right] \ d\boldsymbol{x} \ + \ \varepsilon^2i_0(t)\int_{{0}}u(t) \ d\boldsymbol{s} $$.
End of explanation
@ExactParametrizedFunctions()
class FitzHughNagumo(NonlinearParabolicProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NonlinearParabolicProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
self.du = TrialFunction(V)
(self.du1, self.du2) = split(self.du)
self.u = self._solution
(self.u1, self.u2) = split(self.u)
self.v = TestFunction(V)
(self.v1, self.v2) = split(self.v)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
# Problem coefficients
self.epsilon = 0.015
self.b = 0.5
self.gamma = 2
self.c = 0.05
self.i0 = lambda t: 50000 * t**3 * exp(-15 * t)
self.g = lambda v: v * (v - 0.1) * (1 - v)
# Customize time stepping parameters
self._time_stepping_parameters.update({
"report": True,
"snes_solver": {
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
}
})
# Return custom problem name
def name(self):
return "FitzHughNagumoExact"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
def compute_theta(self, term):
if term == "m":
theta_m0 = self.epsilon
theta_m1 = 1.
return (theta_m0, theta_m1)
elif term == "a":
theta_a0 = self.epsilon**2
theta_a1 = 1.
theta_a2 = - self.b
theta_a3 = self.gamma
return (theta_a0, theta_a1, theta_a2, theta_a3)
elif term == "c":
theta_c0 = - 1.
return (theta_c0,)
elif term == "f":
t = self.t
theta_f0 = self.c
theta_f1 = self.epsilon**2 * self.i0(t)
return (theta_f0, theta_f1)
elif term == "s":
t = self.t
theta_s0 = self.c
theta_s1 = self.epsilon**2 * self.i0(t)
return (theta_s0, theta_s1)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_derivatives
def assemble_operator(self, term):
(v1, v2) = (self.v1, self.v2)
dx = self.dx
if term == "m":
(u1, u2) = (self.du1, self.du2)
m0 = u1 * v1 * dx
m1 = u2 * v2 * dx
return (m0, m1)
elif term == "a":
(u1, u2) = (self.du1, self.du2)
a0 = inner(grad(u1), grad(v1)) * dx
a1 = u2 * v1 * dx
a2 = u1 * v2 * dx
a3 = u2 * v2 * dx
return (a0, a1, a2, a3)
elif term == "c":
u1 = self.u1
c0 = self.g(u1) * v1 * dx
return (c0,)
elif term == "f":
ds = self.ds
f0 = v1 * dx + v2 * dx
f1 = v1 * ds(1)
return (f0, f1)
elif term == "s":
(v1, v2) = (self.v1, self.v2)
ds = self.ds
s0 = v1 * dx + v2 * dx
s1 = v1 * ds(1)
return (s0, s1)
elif term == "inner_product":
(u1, u2) = (self.du1, self.du2)
x0 = inner(grad(u1), grad(v1)) * dx + u2 * v2 * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NonlinearParabolicProblem)
def CustomizeReducedNonlinearParabolic(ReducedNonlinearParabolic_Base):
class ReducedNonlinearParabolic(ReducedNonlinearParabolic_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNonlinearParabolic_Base.__init__(self, truth_problem, **kwargs)
self._time_stepping_parameters.update({
"report": True,
"nonlinear_solver": {
"report": True,
"line_search": "wolfe"
}
})
return ReducedNonlinearParabolic
Explanation: 3. Affine Decomposition
We set the variables $u:=u_1$, $\omega:=u_2$ and the test functions $v:=v_1$, $\tilde{v}:=v_2$.
For this problem the affine decomposition is straightforward:
$$m(\boldsymbol{u},\boldsymbol{v})=\underbrace{\varepsilon}_{\Theta^{m}_0}\underbrace{\int_{I}u_1v_1 \ d\boldsymbol{x}}_{m_0(u_1,v_1)} \ + \ \underbrace{1}_{\Theta^{m}_1}\underbrace{\int_{I}u_2v_2 \ d\boldsymbol{x}}_{m_1(u_2,v_2)},$$
$$a(\boldsymbol{u},\boldsymbol{v})=\underbrace{\varepsilon^2}_{\Theta^{a}_0}\underbrace{\int_{I}\nabla u_1 \cdot \nabla v_1 \ d\boldsymbol{x}}_{a_0(u_1,v_1)} \ + \ \underbrace{1}_{\Theta^{a}_1}\underbrace{\int_{I}u_2v_1 \ d\boldsymbol{x}}_{a_1(u_2,v_1)} \ + \ \underbrace{-b}_{\Theta^{a}_2}\underbrace{\int_{I}u_1v_2 \ d\boldsymbol{x}}_{a_2(u_1,v_2)} \ + \ \underbrace{\gamma}_{\Theta^{a}_3}\underbrace{\int_{I}u_2v_2 \ d\boldsymbol{x}}_{a_3(u_2,v_2)},$$
$$c(u,v)=\underbrace{-1}_{\Theta^{c}_0}\underbrace{\int_{I}g(u_1)v_1 \ d\boldsymbol{x}}_{c_0(u_1,v_1)},$$
$$f(\boldsymbol{v}) = \underbrace{c}_{\Theta^{f}_0} \underbrace{\int_{I}(v_1 + v_2) \ d\boldsymbol{x}}_{f_0(v_1,v_2)} \ + \ \underbrace{\varepsilon^2 i_0(t)}_{\Theta^{f}_1} \underbrace{\int_{\{0\}} v_1 \ d\boldsymbol{s}}_{f_1(v_1)}.$$
We will implement the numerical discretization of the problem in the class
class FitzHughNagumo(NonlinearParabolicProblem):
by specifying the coefficients $\Theta^{m}_*$, $\Theta^{a}_*$, $\Theta^{c}_*$ and $\Theta^{f}_*$ in the method
def compute_theta(self, term):
and the bilinear forms $m_*(\boldsymbol{u}, \boldsymbol{v})$, $a_*(\boldsymbol{u}, \boldsymbol{v})$, $c_*(u, v)$ and linear forms $f_*(\boldsymbol{v})$ in
def assemble_operator(self, term):
End of explanation
mesh = Mesh("data/interval.xml")
subdomains = MeshFunction("size_t", mesh, "data/interval_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/interval_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
V = VectorFunctionSpace(mesh, "Lagrange", 1, dim=2)
Explanation: 4.2. Create Finite Element space (Lagrange P1)
End of explanation
problem = FitzHughNagumo(V, subdomains=subdomains, boundaries=boundaries)
mu_range = []
problem.set_mu_range(mu_range)
problem.set_time_step_size(0.02)
problem.set_final_time(8)
Explanation: 4.3. Allocate an object of the FitzHughNagumo class
End of explanation
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(20)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
reduction_method.initialize_training_set(1)
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
solution_over_time = problem.solve()
reduced_solution_over_time = reduced_problem.solve()
print(reduced_problem.compute_output())
basis_functions = reduced_problem.basis_functions
plot_phase_space(solution_over_time, reduced_solution_over_time, basis_functions, 0.0)
plot_phase_space(solution_over_time, reduced_solution_over_time, basis_functions, 0.1)
plot_phase_space(solution_over_time, reduced_solution_over_time, basis_functions, 0.5)
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(1)
reduction_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation |
14,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Regularised Linear Regression
1.1 Data Extraction and Transformation
Step1: 1.2 Data Visualisation
Step2: 1.2.1 Training Set
Step3: 1.2.2 Validation Set
Step4: 1.2.3 Test Set
Step5: 1.3 Regularised Linear Regression
Hypothesis $h_{\beta}(X) = X\cdot\beta$
Error $e = (h_{\beta}(X) - y)$
Cost Function $J = \frac{1}{2n}{\sum(h_{\beta} - y)^2}$
Regularisation Term $R = \frac{\lambda}{2n}{\sum{\beta}^2}$
Regularised Cost $J = \frac{1}{2n}{\sum(h_{\beta} - y)^2} + \frac{\lambda}{2n}{\sum{\beta}^2}$
Gradient $\frac{\partial J}{\partial \beta_0} = \frac{1}{n}X^{T}\cdot e$<br>
$\frac{\partial J}{\partial \beta_{\neq 0}} = \frac{1}{n}X^{T}\cdot e + \frac{\lambda}{n}\beta$<br>
In the code $\frac{\partial J}{\partial \beta}$ is denoted simply as g.
Step6: Function Test
For the training set and the $\beta$-vector set to ones, the output of the functions should be as follows | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from scipy import optimize
from sklearn.preprocessing import PolynomialFeatures
# file_path (the path to the MATLAB data file) is assumed to be defined earlier in the notebook
def get_data(file_path, xLabel, yLabel):
data = loadmat(file_path)
X = np.insert(data[xLabel], 0, 1, axis=1)
n_samples, n_variables = X.shape
y = data[yLabel]
return X.flatten(), y.flatten(), n_samples, n_variables
def get_β(n_variables):
β = np.zeros(n_variables)
return β
Explanation: 1 Regularised Linear Regression
1.1 Data Extraction and Transformation
End of explanation
def visualiseData(file_path, xLabel, yLabel, title):
data = loadmat(file_path)
plt.plot(data[xLabel], data[yLabel], 'o')
plt.xlabel("Change in water level (x)")
plt.ylabel("Water flowing out of the dam (y)")
plt.title(title)
return plt.show()
Explanation: 1.2 Data Visualisation
End of explanation
visualiseData(file_path, 'X', 'y', 'Traning Data Set')
Explanation: 1.2.1 Training Set
End of explanation
visualiseData(file_path, 'Xval', 'yval', 'Cross Validation Data Set')
Explanation: 1.2.2 Validation Set
End of explanation
visualiseData(file_path, 'Xtest', 'ytest', 'Test Data Set')
Explanation: 1.2.3 Test Set
End of explanation
def get_hypothesis(β, X, n_samples, n_variables):
β = β.reshape(n_variables, -1)
X = X.reshape(n_samples, -1)
# return hypothesis vector h(n, 1), where n is n_samples
return np.dot(X, β)
def cost_function(β, X, y, n_samples, n_variables, λ=0.):
β = β.reshape(n_variables, -1)
X = X.reshape(n_samples, -1)
y = y.reshape(n_samples, -1)
# hypothesis vector h(n, 1)
h = get_hypothesis(β, X, n_samples, n_variables)
# cost scalar J(1, 1); technically the result is an array (1,1) rather than a float
J = np.dot((y-h).T, y-h)/(2*n_samples)
# similarly cost J can be calculated using np.sum
# J = np.sum((y-h)**2)/(2*n_samples)
R = λ*np.dot(β.T, β)/(2*n_samples)
return (J + R)[0][0]
def get_gradient(β, X, y, n_samples, n_variables, λ=0.):
β = β.reshape(n_variables, -1)
X = X.reshape(n_samples, -1)
y = y.reshape(n_samples, -1)
# hypothesis vector h(n, 1)
h = get_hypothesis(β, X, n_samples, n_variables)
# error vector e(n, 1) = h(n, 1) - y(n, 1)
e = h - y
# gradient vector g(k, 1) = X(n, k).T*e(n, 1)*
g = np.dot(X.T,e)/(n_samples)
# regularisation term vector (r(400x1)) — derivative of the regularisation term of the cost funtion
r = β[1:]*(λ/n_samples)
g[1:] = g[1:] + r
return g.flatten()
def plot_regression(β, X, y, n_samples, n_variables):
β = β.reshape(n_variables, -1)
X = X.reshape(n_samples, -1)
y = y.reshape(n_samples, -1)
y_fit = np.dot(X, β)
MSE = np.sum((y - y_fit)**2)/y.shape[0]
plt.plot(X[:,1:], y, 'o', X[:,1:], y_fit, '-')
plt.xlabel("X")
plt.ylabel("Y")
print ("β_0:", β[0][0],
"\nβ_1:", β[1][0],
"\nRegression: Y =", '{:10.2f}'.format(β[0][0]), '+', '{:10.2f}'.format(β[1][0]), "X"
"\nMSE =",'{:10.2f}'.format(MSE))
return plt.show()
Explanation: 1.3 Regularised Linear Regression
Hypothesis $h_{\beta}(X) = X\cdot\beta$
Error $e = (h_{\beta}(X) - y)$
Cost Function $J = \frac{1}{2n}{\sum(h_{\beta} - y)^2}$
Regularisation Term $R = \frac{\lambda}{2n}{\sum{\beta}^2}$
Regularised Cost $J = \frac{1}{2n}{\sum(h_{\beta} - y)^2} + \frac{\lambda}{2n}{\sum{\beta}^2}$
Gradient $\frac{\partial J}{\partial \beta_0} = \frac{1}{n}X^{T}\cdot e$<br>
$\frac{\partial J}{\partial \beta_{\neq 0}} = \frac{1}{n}X^{T}\cdot e + \frac{\lambda}{n}\beta$<br>
In the code $\frac{\partial J}{\partial \beta}$ is denoted simply as g.
End of explanation
X, y, n_samples, n_variables = get_data(file_path, 'X', 'y')
β = get_β(n_variables)
βOnes = np.ones(n_variables)
# print("hypothesis =", get_hypothesis(β_flatOnes, X_flat, n_samples, n_variables))
J = cost_function(βOnes, X, y, n_samples, n_variables, λ=0.)
print(f"J = {J}")
gradient = get_gradient(βOnes, X, y, n_samples, n_variables, λ=0.)
print(f"gradient = {gradient}")
def optimise_β(β_flat, X_flat, Y_flat, n_samples, n_variables, λ=0.):
β_optimisation = optimize.minimize(cost_function, β_flat,
args=(X_flat, Y_flat, n_samples, n_variables, λ),
method=None, jac=get_gradient, options={'maxiter':50})
β_opt = β_optimisation['x']
# β_optimisation = optimize.fmin_cg(cost_function, fprime=gradient, x0=β_flat,
# args=(X_flat, Y_flat, n_samples, n_variables, λ),
# maxiter=50, disp=False, full_output=True)
# β_flat = β_optimisation[0]
return β_opt
β_opt = optimise_β(β, X, y, n_samples, n_variables)
print (f"optimised β {β_opt}")
plot_regression(β_opt, X, y, n_samples, n_variables)
X, y, n_samples, n_variables = get_data(file_path, 'X', 'y')
X_val, y_val, n_samples_val, n_variables_val = get_data(file_path, 'Xval', 'yval')
β = get_β(n_variables)
J_test = []
J_val = []
for i in range(n_samples):
# np.random.seed(0)
# indexSet = np.random.choice(n_samples, i+1, replace=False)
# subsetX = reshapeT(X, n_samples)[indexSet]
# subsetY = reshapeT(y, n_samples)[indexSet]
subsetX = X.reshape(n_samples, -1)[:i+1,:]
subsetY = y.reshape(n_samples, -1)[:i+1]
flatSubsetX = subsetX.flatten()
flatSubsetY = subsetY.flatten()
β_fit = optimise_β(β, flatSubsetX, flatSubsetY, i+1, n_variables)
y_fit_test = np.dot(subsetX, β_fit)
J_test += [cost_function(β_fit, flatSubsetX, flatSubsetY, i+1, n_variables, λ=0.)]
y_fit_val = np.dot(X_val.reshape(n_samples_val, -1), β_fit)
J_val += [cost_function(β_fit, X_val, y_val, n_samples_val, n_variables_val, λ=0.)]
plt.plot(range(1,n_samples + 1), J_test, '-', label='Training Set')
plt.plot(range(1,n_samples + 1), J_val, '-', label='Cross-Validation Set')
plt.xlabel("Training-Set Size")
plt.ylabel("J")
plt.title("Linear-Regression Learning Curve")
plt.legend()
plt.show()
def polynomialsANDinteractions(file_path, xLabel, yLabel, polynomialDegree):
data = loadmat(file_path)
X = data[xLabel]
y = data[yLabel]
poly = PolynomialFeatures(polynomialDegree)
poly_X = poly.fit_transform(X)
n_samples, n_variables = poly_X.shape
return poly_X.flatten(), y.flatten(), n_samples, n_variables
def normalise(X, n_samples):
normalisedX = X.reshape(n_samples, -1).copy()
for i in (range(normalisedX.shape[1])):
if np.std(normalisedX[:,i]) != 0:
normalisedX[:,i] = (normalisedX[:,i] - np.mean(normalisedX[:,i]))/np.std(normalisedX[:,i])
return normalisedX.flatten()
polynomialDegree = 8
poly_X, y, polyn_samples, polyn_variables = polynomialsANDinteractions(file_path, 'X', 'y', polynomialDegree)
poly_β = get_β(polyn_variables)
normPolyX = normalise(poly_X, polyn_samples)
normY = normalise(y, polyn_samples)
print(polyn_samples)
print(polyn_variables)
print(np.max(normPolyX))
β_opt_poly = optimise_β(poly_β, normPolyX, normY, polyn_samples, polyn_variables)
print (β_opt_poly)
# def plot_regression(β, X, y, n_samples, n_variables):
poly_β = poly_β.reshape(polyn_variables, -1)
poly_X = poly_X.reshape(n_samples, -1)
y = normY.reshape(n_samples, -1)
y_fit = np.dot(poly_X, poly_β)
MSE = np.sum((y - y_fit)**2)/y.shape[0]
plt.plot(poly_X[:,1:2], y_fit, 'o')
# plt.plot(poly_X[:,1:2], y, 'o', X[:,1:2], y_fit, '-')
# plt.xlabel("X")
# plt.ylabel("Y")
# print ("β_0:", β[0][0],
# "\nβ_1:", β[1][0],
# "\nRegression: Y =", '{:10.2f}'.format(β[0][0]), '+', '{:10.2f}'.format(β[1][0]), "X"
# "\nMSE =",'{:10.2f}'.format(MSE))
plt.show()
Explanation: Function Test
For the training set and the $\beta$-vector set to ones, the output of the functions should be as follows:<br>
cost_function — J = 303.951525554<br>
gradient — gradient = [ -15.30301567 598.16741084]
End of explanation |
14,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training deep neural networks
@cesans
Step1: Loading data
Previously generated trajectories can be loaded with dc.data.load_trajectories
Step2: Training
From the trajectories we can generate the training sets
Step3: We specify a model to train | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.append('..')
import numpy as np
import deep_control as dc
Explanation: Training deep neural networks
@cesans
End of explanation
# The time column is automatically discarded
# For free landing we drop the 'x' column
col_names = ['t', 'm', #'x',
'vx', 'z', 'vz', 'theta', 'vtheta', 'T', 'Tl', 'Tr']
cols = [0,1, #2,
3,4,5,6,7,8,9,10]
trajs = dc.data.load_trajectories('data/main_thrusters/', col_names=col_names, cols=cols)
Explanation: Loading data
Previously generated trajectories can be loaded with dc.data.load_trajectories
End of explanation
train_p = 0.9 # proportion of training data
x_train, y_train, x_test, y_test, idx_train = dc.data.create_training_data(trajs, train_p = train_p, n_outputs=3)
dc.nn.save_training_data([x_train, y_train, x_test, y_test, idx_train], "mass_thrusters")
Explanation: Training
From the trajectories we can generate the training sets:
End of explanation
model_description = {"data": "mass_thrusters",
"control": dc.nn.THRUST,
"nlayers": 3,
"units": 128,
"output_mode": dc.nn.OUTPUT_LOG,
"dropout": False,
"batch_size": 8,
"epochs": 32,
"lr": 0.001,
"input_vars" : 6,
"hidden_nonlinearity": "ReLu"}
dc.nn.train(model_description)
Explanation: We specify a model to train
End of explanation |
14,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alternative libraries
Before this, the two main libraries used for scraping a webpage were requests and BeautifulSoup. However, there are also alternative libraries that can serve the same purpose.
urllib2 - the standard Python library for sending requests to a URL and reading the HTML content. The two main functions are urlopen() (similar to get() from requests) and read() (similar to text from requests)
lxml - a third party library (like BeautifulSoup) used for parsing xml and html files. The syntax is very similar to that of BeautifulSoup yet this library is much faster. The disadvantage is that it is best suited to standard webpages, not to more or less unstructured ones (not to soups).
<blockquote>
It is worth noting that **lxml** has a soupparser module (**lxml.html.soupparser**), which *"mimics"* the **BeautifulSoup** approach. At the same time, the **BeautifulSoup()** function from the library of the same name can take **lxml** as an argument and use the latter as a parser to scrape websites more quickly.
</blockquote>
Step1: The findAll() function from BeautifulSoup is replaced by cssselect() in lxml, which finds all the tags given inside quotes as follows.
Step2: To get the text content of the tag the text_content() function should be used on an element of the list.
Step3: We may use the table attributes to find the correct table that we are looking for. Multiple attributes can be listed one by one each inside square brackets and separated by a comma as follows
Step4: To get the text content of each table, we should create a for loop that will iterate over the list of tables and provide us with the text content.
Step5: One thing that can be considered as an advantage to the lxml library is that it provides two options for scraping
Step6: To find the table that has a border argument with a value of 0, the following approach should be used.
Step7: If one is interested in getting the value of an attibute (similar to get() in BeautifulSoup), then @ without square brackets can be used after the / sign as follows | Python Code:
import urllib2
from lxml import html
url = "https://careercenter.am/ccidxann.php"
response = urllib2.urlopen(url)
page = response.read()
tree = html.document_fromstring(page)
Explanation: Alternative libraries
Before this, the two main libraries used for scraping a webpage were requests and BeautifulSoup. However, there are also alternative libraries that can serve the same purpose.
urllib2 - the standard Python library for sending requests to a URL and reading the HTML content. The two main functions are urlopen() (similar to get() from requests) and read() (similar to text from requests)
lxml - a third party library (like BeautifulSoup) used for parsing xml and html files. The syntax is very similar to that of BeautifulSoup yet this library is much faster. The disadvantage is that it is best suited to standard webpages, not to more or less unstructured ones (not to soups).
<blockquote>
It is worth noting that **lxml** has a soupparser module (**lxml.html.soupparser**), which *"mimics"* the **BeautifulSoup** approach. At the same time, the **BeautifulSoup()** function from the library of the same name can take **lxml** as an argument and use the latter as a parser to scrape websites more quickly.
</blockquote>
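For example, assuming both libraries are installed, BeautifulSoup can be told to use lxml as its parser (an illustrative aside, not part of the original notebook):
from bs4 import BeautifulSoup
soup = BeautifulSoup(page, "lxml")  # same BeautifulSoup interface, parsed by lxml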
End of explanation
tables = tree.cssselect("table")
len(tables)
Explanation: The findAll() function from BeautifulSoup is replaced by cssselect() in lxml, which finds all the tags given inside quotes as follows.
End of explanation
tables[-1].text_content()
Explanation: To get the text content of the tag the text_content() function should be used on an element of the list.
End of explanation
our_table = tree.cssselect('[width="100%"],[border="0"]')
Explanation: We may use the table attributes to find the correct table that we are looking for. Multiple attributes can be listed one by one each inside square brackets and separated by a comma as follows:
End of explanation
for i in our_table:
print(i.text_content())
Explanation: To get the text content of each table, we should create a for loop that will iterate over the list of tables and provide us with the text content.
End of explanation
tree.xpath('//table')[-1].text_content()
Explanation: One thing that can be considered an advantage of the lxml library is that it provides two options for scraping: 1) CSS selectors (similar to BeautifulSoup) and 2) XPath. The latter is not supported by BeautifulSoup, yet it can sometimes be quite handy. XPath is the navigation language for the XML files that lxml is designed to work with. To work with XPath one uses the forward slash (/) to define the path and the "@" sign inside square brackets ([ ]) to refer to an attribute. To look for tables anywhere in the document, the //table path can be used.
End of explanation
tree.xpath('//table[@border="0"]')[-1].text_content()
Explanation: To find the table that has a border argument with a value of 0, the following approach should be used.
End of explanation
tree.xpath('//table/@border')
Explanation: If one is interested in getting the value of an attribute (similar to get() in BeautifulSoup), then @ without square brackets can be used after the / sign as follows:
End of explanation |
14,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming Assignment
Step1: Building the corpus
Step2: Our collection is small and fits entirely into memory. Gensim can work with such data directly and does not require saving it to disk in a special format. For that, the collection must be represented as a list of lists, where each inner list corresponds to a single document and consists of its words. An example collection of two documents
Step3: The dictionary object has a useful attribute, dictionary.token2id, which maps ingredients to their indices.
Training the model
You may need the gensim LDA documentation.
Task 1. Train an LDA model with 40 topics, setting the number of passes over the collection to 5 and leaving the remaining parameters at their defaults.
Then call the model's show_topics method with 40 topics and 10 tokens and store the result (the top ingredients per topic) in a separate variable. If show_topics is called with formatted=True, the ingredient tops are convenient to print; with formatted=False they are convenient to process programmatically. Print the tops, inspect the topics, and then answer the question
Step4: Filtering the dictionary
The first three ingredients above appear in the topic tops far more often than the last three. Yet the presence of chicken, eggs or mushrooms in a recipe tells us much more clearly what we are going to cook than the presence of salt, sugar or water. So even recipes contain words that occur frequently and carry little meaning, and we would rather not see them in the topics. The simplest way to fight such background elements is to filter the dictionary by frequency. The dictionary is usually filtered from both sides
Step5: Task 2. The dictionary2 object has a dfs attribute -- a dictionary whose keys are token ids and whose values are the number of times the word occurs in the whole collection. Store in a separate list the ingredients that occur in the collection more than 4000 times. Call the dictionary's filter_tokens method, passing the resulting list of popular ingredients as its first argument. Compute two quantities
Step6: Comparing coherences
Task 3. Build one more model from the corpus2 corpus and the dictionary2 dictionary, keeping the remaining parameters the same as for the first model. Save the new model into a different variable (do not overwrite the previous model). Do not forget to fix the seed!
Then use the model's top_topics method to compute its coherence. Pass the corpus corresponding to the model as the argument. The method returns a list of tuples (top tokens, coherence) sorted by decreasing coherence. Compute the average coherence over all topics for each of the two models and pass the values to the save_answers3 function.
Step7: Coherence is considered to correlate well with human judgements of topic interpretability. On large text collections coherence therefore usually improves when background vocabulary is removed. In our case, however, this did not happen.
Studying the effect of the alpha hyperparameter
In this section we work with the second model, i.e. the one built on the reduced corpus.
So far we have only looked at the topic-word matrix; now let us look at the topic-document matrix. Print the topics for the zeroth (or any other) document of the corpus using the second model's get_document_topics method
Step8: Also print the contents of the second model's .alpha attribute
Step9: You should find that the document is characterized by a small number of topics. Let us try changing the alpha hyperparameter, which defines the Dirichlet prior for the topic distributions of documents.
Task 4. Train a third model
Step10: Thus the alpha hyperparameter controls the sparsity of the topic distributions of documents. Similarly, the eta hyperparameter controls the sparsity of the word distributions of topics.
LDA as a dimensionality reduction technique
Sometimes the topic distributions found by LDA are added to the object-feature matrix as extra, semantic, features, and this can improve the quality of the solution. For simplicity, let us just train a classifier of recipes into cuisines on the features obtained from LDA and measure its accuracy.
Task 5. Use the model built on the reduced collection with the default alpha (the second model). Build the matrix $\Theta = p(t|d)$ of topic probabilities in documents; you can use the same get_document_topics method, as well as the vector of correct answers y (in the same order in which the recipes appear in the recipes variable). Create a RandomForestClassifier with 100 trees, compute the average accuracy over three folds with the cross_val_score function (no need to shuffle the data) and pass it to the save_answers5 function.
Step11: For such a large number of classes this is a decent accuracy. You can try training the RandomForest on the original word-frequency matrix, which has a much higher dimensionality, and see that accuracy increases by 10-15%. Thus LDA captured not all, but a fairly large part of the information in the sample, in a low-rank matrix.
LDA is a probabilistic model
The matrix factorization used in LDA is interpreted as the following document generation process.
For a document $d$ of length $n_d$
Step12: Interpreting the trained model
You can inspect the top ingredients of each topic. Most topics look like recipes in themselves; some collect products of a single kind, for example fresh fruit or different kinds of cheese.
Let us try to empirically relate our topics to national cuisines (cuisine). We build a matrix $A$ of size topics $\times$ cuisines whose elements $a_{tc}$ are the sums of $p(t|d)$ over all documents $d$ assigned to cuisine $c$. We normalize the matrix by the recipe counts of the different cuisines to avoid an imbalance between cuisines. The following function takes the model object, the corpus object and the raw data, and returns the normalized matrix $A$. It is convenient to visualize with seaborn. | Python Code:
import json
with open("recipes.json") as f:
recipes = json.load(f)
print(recipes[0])
Explanation: Programming Assignment:
Cooking up LDA from recipes
As you already know, topic modelling assumes that the order of words in a document does not matter for determining its topics; this is the "bag of words" hypothesis. Today we will work with a collection that is somewhat unusual for topic modelling and could be called a "bag of ingredients", because it consists of recipes from different cuisines. Topic models look for words that frequently co-occur in documents and assemble them into topics. We will try to apply this idea to recipes and find culinary "topics". This collection is convenient because it requires no preprocessing. In addition, the task illustrates quite clearly how topic models work.
Besides the libraries commonly used in the course, the assignment requires the json and gensim modules. The first one ships with the Anaconda distribution; the second can be installed with
pip install gensim
Building a model takes some time. On a laptop with an Intel Core i7 processor clocked at 2400 MHz, building one model takes less than 10 minutes.
Loading the data
The collection is given in json format: for each recipe we know its id, its cuisine and the list of ingredients it contains. The data can be loaded with the json module (it ships with the Anaconda distribution):
End of explanation
from gensim import corpora, models
import numpy as np
Explanation: Building the corpus
End of explanation
texts = [recipe["ingredients"] for recipe in recipes]
dictionary = corpora.Dictionary(texts) # build the dictionary
corpus = [dictionary.doc2bow(text) for text in texts] # build the corpus of documents
print(texts[0])
print(corpus[0])
Explanation: Our collection is small and fits entirely into memory. Gensim can work with such data directly and does not require saving it to disk in a special format. For that, the collection must be represented as a list of lists, where each inner list corresponds to a single document and consists of its words. An example collection of two documents:
[["hello", "world"], ["programming", "in", "python"]]
Let us convert our data to this format and then create the corpus and dictionary objects that the model will work with.
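For example, on the toy two-document collection above, the dictionary and the bag-of-words corpus look like this (illustrative aside, not part of the assignment):
from gensim import corpora
toy_texts = [["hello", "world"], ["programming", "in", "python"]]
toy_dictionary = corpora.Dictionary(toy_texts)
print(toy_dictionary.token2id)                              # word -> id mapping
print([toy_dictionary.doc2bow(text) for text in toy_texts]) # list of (id, count) pairs per document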
End of explanation
np.random.seed(76543)
# model-building code goes here:
lda_1 = models.LdaModel(corpus, id2word=dictionary, num_topics=40, passes=5)
topics = lda_1.show_topics(num_topics=40, num_words=10, formatted=False)
c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs = 0, 0, 0, 0, 0, 0
for _, top_words in lda_1.print_topics(num_topics=40, num_words=10):
c_salt += top_words.count(u'salt')
c_sugar += top_words.count(u'sugar')
c_water += top_words.count(u'water')
c_mushrooms += top_words.count(u'mushrooms')
c_chicken += top_words.count(u'chicken')
c_eggs += top_words.count(u'eggs')
def save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs):
with open("cooking_LDA_pa_task1.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs]]))
print(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)
save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)
Explanation: The dictionary object has a useful attribute, dictionary.token2id, which maps ingredients to their indices.
Training the model
You may need the LDA documentation in gensim.
Task 1. Train an LDA model with 40 topics, setting the number of passes over the collection to 5 and leaving the remaining parameters at their defaults.
Then call the model's show_topics method with 40 topics and 10 tokens, and save the result (the top ingredients of each topic) to a separate variable. If you pass formatted=True to show_topics, the ingredient tops are convenient to print; with formatted=False, the result is convenient to process programmatically. Print the tops, look at the topics, and then answer the question:
How many times do the ingredients "salt", "sugar", "water", "mushrooms", "chicken", "eggs" appear among the top 10 of all 40 topics? Compound ingredients, such as "hot water", should not be counted.
Pass the 6 numbers to the save_answers1 function and upload the generated file to the form.
gensim does not allow fixing the random seed through method parameters, but the library uses numpy to initialize its matrices. Therefore, according to the library's author, the random seed has to be fixed with the command written in the next cell. Always insert this random.seed line right before the line of code that builds the model.
End of explanation
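As a small aside, the token2id mapping mentioned above can be queried directly; a quick illustration (assuming "salt" occurs in the collection):
print(dictionary.token2id["salt"])               # integer id of the token
print(dictionary[dictionary.token2id["salt"]])   # maps the id back to "salt"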
import copy
dictionary2 = copy.deepcopy(dictionary)
Explanation: Filtering the dictionary
The first three ingredients considered above appear in the topic tops far more often than the last three, yet the presence of chicken, eggs or mushrooms in a recipe tells us much more clearly what we are going to cook than the presence of salt, sugar and water. So even recipes contain words that occur frequently without carrying meaning, and we would rather not see them in the topics. The simplest way to fight such background elements is to filter the dictionary by frequency. Usually the dictionary is filtered from both ends: very rare words are removed (to save memory) and very frequent words are removed (to make topics more interpretable). We will remove only the frequent words.
End of explanation
frequent_words = list()
for el in dictionary2.dfs:
if dictionary2.dfs[el] > 4000:
frequent_words.append(el)
print(frequent_words)
dict_size_before = len(dictionary2.dfs)
dictionary2.filter_tokens(frequent_words)
dict_size_after = len(dictionary2.dfs)
corpus2 = [dictionary2.doc2bow(text) for text in texts]
corpus_size_before = 0
for i in corpus:
corpus_size_before += len(i)
corpus_size_after = 0
for i in corpus2:
corpus_size_after += len(i)
def save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after):
with open("cooking_LDA_pa_task2.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [dict_size_before, dict_size_after, corpus_size_before, corpus_size_after]]))
print(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)
save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)
Explanation: Task 2. The dictionary2 object has a dfs attribute: a dictionary whose keys are token ids and whose values are the number of times the word occurred in the whole collection. Save to a separate list the ingredients that occurred in the collection more than 4000 times. Call the dictionary's filter_tokens method, passing the resulting list of popular ingredients as its first argument. Compute two quantities: dict_size_before and dict_size_after, the size of the dictionary before and after filtering.
Then, using the new dictionary, create a new corpus of documents, corpus2, by analogy with what was done at the beginning of the notebook. Compute two quantities: corpus_size_before and corpus_size_after, the total number of ingredients in the corpus (for each document compute the number of distinct ingredients in it and sum over all documents) before and after filtering.
Pass dict_size_before, dict_size_after, corpus_size_before, corpus_size_after to the save_answers2 function and upload the generated file to the form.
End of explanation
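As an aside (not required by the task), gensim's Dictionary also offers filter_extremes, which filters by document frequency directly; a minimal sketch on a fresh copy (the name dict_alt is purely illustrative):
dict_alt = copy.deepcopy(dictionary)
dict_alt.filter_extremes(no_below=1, no_above=0.1)  # drop tokens appearing in more than 10% of documents
print(len(dict_alt))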
np.random.seed(76543)
lda_2 = models.LdaModel(corpus2, id2word=dictionary2, num_topics = 40, passes = 5)
top_topics_1 = lda_1.top_topics(corpus)
top_topics_2 = lda_2.top_topics(corpus2)
def topics_mean(all_topics):
return np.mean([one_topics[1] for one_topics in all_topics])
coherence_1 = topics_mean(top_topics_1)
coherence_2 = topics_mean(top_topics_2)
def save_answers3(coherence_1, coherence_2):
with open("cooking_LDA_pa_task3.txt", "w") as fout:
fout.write(" ".join(["%3f"%el for el in [coherence_1, coherence_2]]))
print(coherence_1, coherence_2)
save_answers3(coherence_1, coherence_2)
Explanation: Comparing coherences
Task 3. Build one more model on the corpus corpus2 and the dictionary dictionary2, keeping the remaining parameters the same as when you built the first model. Save the new model into a different variable (do not overwrite the previous model). Do not forget to fix the seed!
Then use the model's top_topics method to compute its coherence. Pass the corpus corresponding to the model as the argument. The method returns a list of tuples (top tokens, coherence) sorted by decreasing coherence. Compute the coherence averaged over all topics for each of the two models and pass the values to the save_answers3 function.
End of explanation
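For reference, gensim also ships a CoherenceModel class that computes a comparable corpus-based (u_mass) coherence; a minimal sketch, not required by the assignment:
from gensim.models import CoherenceModel
cm = CoherenceModel(model=lda_2, corpus=corpus2, dictionary=dictionary2, coherence="u_mass")
print(cm.get_coherence())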
lda_2.get_document_topics(corpus2[0])
Explanation: Coherence is considered to correlate well with human judgments of topic interpretability. That is why, on large text collections, coherence usually increases once background vocabulary is removed. In our case, however, this did not happen.
Studying the influence of the alpha hyperparameter
In this section we will work with the second model, i.e. the one built on the reduced corpus.
So far we have only looked at the topic-word matrix; now let us look at the topic-document matrix. Print the topics for the zeroth (or any other) document of the corpus using the get_document_topics method of the second model:
End of explanation
lda_2.alpha
Explanation: Also print the contents of the second model's .alpha attribute:
End of explanation
np.random.seed(76543)
lda_3 = models.ldamodel.LdaModel(corpus2, id2word=dictionary2, num_topics=40, passes=5, alpha = 1)
lda_3.get_document_topics(corpus2[0])
def sum_doc_topics(model, corpus):
return sum([len(model.get_document_topics(i, minimum_probability=0.01)) for i in corpus])
count_lda_2 = sum_doc_topics(lda_2,corpus2)
count_lda_3 = sum_doc_topics(lda_3,corpus2)
def save_answers4(count_model_2, count_model_3):
with open("cooking_LDA_pa_task4.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [count_model_2, count_model_3]]))
print(count_lda_2, count_lda_3)
save_answers4(count_lda_2, count_lda_3)
Explanation: You should find that the document is characterized by a small number of topics. Let us try changing the alpha hyperparameter, which sets the Dirichlet prior for the topic distributions of documents.
Task 4. Train a third model: use the reduced corpus (corpus2 and dictionary2) and set alpha=1, passes=5. Do not forget to fix the seed! Print the topics of the new model for the zeroth document; the distribution over the set of topics should turn out to be almost uniform. To verify that the second model describes documents with much sparser distributions than the third, count the total number of elements exceeding 0.01 in the topic-document matrices of both models. In other words, request the model's topics for every document with minimum_probability=0.01 and sum the number of elements in the resulting arrays. Pass the two sums (first for the model with the default alpha, then for the model with alpha=1) to the save_answers4 function.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
X = np.zeros((len(recipes), 40))
y = [recipe['cuisine'] for recipe in recipes]
for i in range(len(recipes)):
for top in lda_2.get_document_topics(corpus2[i]):
X[i, top[0]] = top[1]
RFC = RandomForestClassifier(n_estimators = 100)
estimator = cross_val_score(RFC, X, y, cv=3).mean()
def save_answers5(accuracy):
with open("cooking_LDA_pa_task5.txt", "w") as fout:
fout.write(str(accuracy))
print(estimator)
save_answers5(estimator)
Explanation: Thus, the alpha hyperparameter controls the sparsity of the topic distributions in documents. Similarly, the eta hyperparameter controls the sparsity of the word distributions in topics.
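A minimal sketch of how eta could be set (the model name lda_sparse is illustrative and this is not part of the assignment); a small eta encourages sparser word distributions within topics:
np.random.seed(76543)
lda_sparse = models.LdaModel(corpus2, id2word=dictionary2, num_topics=40, passes=5, eta=0.01)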
LDA as a dimensionality reduction technique
Sometimes the topic distributions found with LDA are added to the object-feature matrix as extra, semantic, features, and this can improve the quality of the solution. For simplicity, let us just train a classifier of recipes into cuisines on the features obtained from LDA and measure the accuracy.
Task 5. Use the model built on the reduced sample with the default alpha (the second model). Build the matrix $\Theta = p(t|d)$ of topic probabilities in documents; you can use the same get_document_topics method, as well as the vector of correct answers y (in the same order in which the recipes appear in the recipes variable). Create a RandomForestClassifier object with 100 trees, compute the mean accuracy over three folds with the cross_val_score function (there is no need to shuffle the data) and pass it to the save_answers5 function.
End of explanation
def generate_recipe(model, num_ingredients):
theta = np.random.dirichlet(model.alpha)
for i in range(num_ingredients):
t = np.random.choice(np.arange(model.num_topics), p=theta)
topic = model.show_topic(t, topn=model.num_terms)
topic_distr = [x[1] for x in topic]
terms = [x[0] for x in topic]
w = np.random.choice(terms, p=topic_distr)
        print(w)
print(generate_recipe(lda_1, 5))
print('\n')
print(generate_recipe(lda_2, 5))
print('\n')
print(generate_recipe(lda_3, 5))
Explanation: For such a large number of classes this is decent accuracy. You can try training the RandomForest on the original word-frequency matrix, which has a much higher dimensionality, and see that the accuracy increases by 10–15%. So LDA captured not all, but a fairly large part of the information in the sample, in a low-rank matrix.
LDA is a probabilistic model
The matrix factorization used in LDA is interpreted as the following document-generation process.
For a document $d$ of length $n_d$:
1. Draw a distribution over the set of topics from the Dirichlet prior with parameter alpha: $\theta_d \sim Dirichlet(\alpha)$
1. For each word $w = 1, \dots, n_d$:
1. Draw a topic from the discrete distribution $t \sim \theta_{d}$
1. Draw a word from the discrete distribution $w \sim \phi_{t}$.
More details can be found on Wikipedia.
In the context of our task this means that, using this generative process, we can create new recipes. You can pass the model and a number of ingredients to the function and generate a recipe :)
End of explanation
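To see how the Dirichlet parameter controls sparsity, here is a small standalone illustration (the parameter values are arbitrary and only for demonstration):
print(np.random.dirichlet(np.ones(5) * 0.1))   # small alpha: mass concentrates on a few components
print(np.random.dirichlet(np.ones(5) * 10.0))  # large alpha: nearly uniform vector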
import pandas
import seaborn
from matplotlib import pyplot as plt
%matplotlib inline
def compute_topic_cuisine_matrix(model, corpus, recipes):
    # build the vector of target labels (cuisines)
targets = list(set([recipe["cuisine"] for recipe in recipes]))
    # build the topic-by-cuisine matrix
tc_matrix = pandas.DataFrame(data=np.zeros((model.num_topics, len(targets))), columns=targets)
for recipe, bow in zip(recipes, corpus):
recipe_topic = model.get_document_topics(bow)
for t, prob in recipe_topic:
tc_matrix[recipe["cuisine"]][t] += prob
    # normalize the matrix by cuisine frequencies
target_sums = pandas.DataFrame(data=np.zeros((1, len(targets))), columns=targets)
for recipe in recipes:
target_sums[recipe["cuisine"]] += 1
return pandas.DataFrame(tc_matrix.values/target_sums.values, columns=tc_matrix.columns)
def plot_matrix(tc_matrix):
plt.figure(figsize=(10, 10))
seaborn.heatmap(tc_matrix, square=True)
# Visualize the matrix
plot_matrix(compute_topic_cuisine_matrix(lda_1, corpus, recipes))
plot_matrix(compute_topic_cuisine_matrix(lda_2, corpus2, recipes))
plot_matrix(compute_topic_cuisine_matrix(lda_3, corpus2, recipes))
Explanation: Interpreting the fitted model
You can inspect the top ingredients of each topic. Most topics look like recipes in their own right; some collect products of a single kind, for example fresh fruit or different kinds of cheese.
Let us try to empirically relate our topics to national cuisines (cuisine). We build a matrix $A$ of size topics $\times$ cuisines, whose elements $a_{tc}$ are the sums of $p(t|d)$ over all documents $d$ assigned to cuisine $c$. We normalize the matrix by the recipe counts of the different cuisines to avoid imbalance between cuisines. The function below takes the model object, the corpus object and the raw data, and returns the normalized matrix $A$. It is convenient to visualize it with seaborn.
End of explanation |
14,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WrightTools for Numpy Users
As scientists transitioned to the Python Scientific Library during its rise in popularity, new users needed translations from their familiar software, such as Matlab. In the same way, it's important to compare Numpy strategies for organizing data with the framework of WrightTools. This notebook attempts to show how common tools of Numpy (especially advanced indexing) can translate to the WrightTools framework relatively easily.
These examples are concerned with the Data objects that retain the rectangular shapes of ndarrays. The benefits of numpy arrays can generally be accessed within WrightTools. It is a subset of the more general Data object representations that can be used (via making axes that are linear combinations of variables).
Don't forget to also consult the WrightTools documentation or examine WrightTools test scripts for examples.
numpy ndarrays --> data object
Step1: Note that we need to broadcast the variables into the data object. The reason is similar to why we need to broadcast x and y arrays when defining the z array
Step2: Indexing, advanced indexing
Under the hood, WrightTools objects, such as Data, Channel, and Variable, are a type of hdf5 object, not numpy arrays (for more info, read about the wt5 format). As such, slice commands emulate ndarray behavior, and only support a subset of all numpy array indexing tricks.
Access datasets as numpy arrays using slice commands
Step3: REMINDER
Step4: A quick workaround, of course, is to work directly with the ndarray view (i.e. data.z[
Step5: Be careful, though! Applying this indexing will not give you write capabilities
Step6: Alternatives to expanding dimensionality include making a new data object with expanded ndarrays, or using wt.data.join. We show how to expand dimensionality using wt.data.join further below.
Do not use boolean array indexing on channels and variables - consider split
Step7: data.z cannot be indexed with a boolean array
Step8: Here's a few more good examples of using split
Slicing (Keep Dimensionality) --> data.split
Step9: Slicing (Reduce Dimensionality) --> data.chop
Step10: Use chop to loop through reduced dimensions of the data.
Step11: For very large datasets, the second option is useful because you never deal with the whole collection. Thus you can loop through individual chop elements and close them after each iteration, saving memory.
np.newaxis (or arr[
Step12: We have added a new variable, but new_data does not increase dimensionality. Dimensionality corresponds to the array shapes, not the number of variables (the dimensionality would still be 2 even if Temperature changed for each x and y coordinate).
Even though the dimensionality has not changed, new_data now understands another axis is in play. The above procedure allows us to expand the dimensionality via join.
np.concatenate/tile/stack/block, etc --> wt.data.join
Case I
Step13: Note that this strategy can be used to undo the chop operation
Step14: Case II
Step15: Channel Array Math
Binary (channel + constant) operations
Step16: Binary (channel + channel) Operations
Step17: Variable Math
Variables require tricky syntax to change (the above channel math will not work). But the following will work
Step18: However, do you really want to change your variables, or just use different units? It's often the latter. In that case, apply convert to the axes
Step19: Maybe you do need to do math that is not a unit conversion (for instance, shift delay values). If needed, you can overwrite the old variable by removing it and renaming the new variable
Step20: Axes Math - transform
Axes are just expressions of variables, so the scope of Axis math is the linear combinations of variables (plus offsets). Keep in mind that calling linear combinations of variables will force the Data object to rectify the units of all variables involved. | Python Code:
import numpy as np
import WrightTools as wt
print(np.__version__) # tested on 1.18.1
print(wt.__version__) # tested on 3.3.1
x = np.linspace(0, 1, 5) # Hz
y = np.linspace(500, 700, 3) # nm
z = np.exp(-x[:, None]) * np.sqrt(y - 500)[None, :]
data = wt.Data()
data.create_channel(name="z", values=z)
# BE SURE TO BROADCAST REDUCED DIM VARIABLES--this is how wt.Data knows dimensionality
data.create_variable(name="x", values=x[:, None], units="Hz")
data.create_variable(name="y", values=y[None, :], units="nm")
data.transform("x", "y") # create axes
data.print_tree()
Explanation: WrightTools for Numpy Users
As scientists transitioned to the Python Scientific Library during its rise in popularity, new users needed translations from their familiar software, such as Matlab. In the same way, it's important to compare Numpy strategies for organizing data with the framework of WrightTools. This notebook attempts to show how common tools of Numpy (especially advanced indexing) can translate to the WrightTools framework relatively easily.
These examples are concerned with the Data objects that retain the rectangular shapes of ndarrays. The benefits of numpy arrays can generally be accessed within WrightTools. It is a subset of the more general Data object representations that can be used (via making axes that are linear combinations of variables).
Don't forget to also consult the WrightTools documentation or examine WrightTools test scripts for examples.
numpy ndarrays --> data object
End of explanation
print([ax.natural_name for ax in data.axes])
print(f"data.z.shape= {data.z.shape}")
data.transform("y", "x")
print([ax.natural_name for ax in data.axes]) # order of axes switches
print(f"data.z.shape = {data.z.shape}") # shape of channel does not
Explanation: Note that we need to broadcast the variables into the data object. The reason is similar to why we need to broadcast x and y arrays when defining the z array: otherwise it is not clear that the data is multidimensional. Failing to broadcast is a "gotcha", because WrightTools will still create data (just as with the z-array, you can make a 1D array from x and y); you will only run into problems once you try to work with the data.
WARNING: Array index order does not correspond to axes numbers!
End of explanation
print(type(data.z))
print(type(data.z[:])) # a view of the z values as a numpy array
print(data.z[:5, 2:3]) # typical slicing arguments works here as well
Explanation: Indexing, advanced indexing
Under the hood, WrightTools objects, such as Data, Channel, and Variable, are a type of hdf5 object, not numpy arrays (for more info, read about the wt5 format). As such, slice commands emulate ndarray behavior, and only support a subset of all numpy array indexing tricks.
Access datasets as numpy arrays using slice commands:
All regular slice commands work. Using a null slice returns a full view of the channel or variable as a numpy array.
End of explanation
try: # raises TypeError
data.z[..., np.newaxis] # or, equivalently, data.z[..., None]
except TypeError:
print("didn't work!")
Explanation: REMINDER: the relationship between axis number and channel indices is not fixed (cf. data.transform, above), and can be difficult to discern. For this reason, index slicing can get confusing quickly, especially if several dimensions have the same size. For a versatile option that leverages the strengths of WrightTools, use the split and chop methods to slice and iterate along dimensions, respectively. Examples using both methods are shown further below.
Do not use newaxis or None indexing to expand dimensionality
End of explanation
data.z[:][..., np.newaxis] # no error
data.z[:][np.isnan(data.z[:])] # no error
Explanation: A quick workaround, of course, is to work directly with the ndarray view (i.e. data.z[:]), which is a numpy array and accepts all regular numpy indexing tricks. For example, newaxis works here, as do other advanced indexing methods:
End of explanation
temp = data.copy(verbose=False)
temp.z[:][..., np.newaxis] *= -1  # no error, but z does not change: data.z[:] returns a copy, so in-place edits never reach the stored channel
print(np.all(temp.z[:] == data.z[:])) # temp.z is unchanged!
Explanation: Be careful, though! Applying this indexing will not give you write capabilities:
End of explanation
positive = z > 0 # first column is False
z_advind = z[positive] # traditional boolean array indexing with a numpy array
try:
data.z[positive] # doesn't work
except TypeError:
print("Boolean indexing of channel did not work!")
Explanation: Alternatives to expanding dimensionality include making a new data object with expanded ndarrays, or using wt.data.join. We show how to expand dimensionality using wt.data.join further below.
Do not use boolean array indexing on channels and variables - consider split
End of explanation
temp = data.copy()
temp.create_variable(name="zvar", values=positive)
zpositive = temp.split("zvar", [True])[1]
print(data.z[:], '\n')
print(positive, '\n')
print(zpositive.z[:], '\n')
Explanation: data.z cannot be indexed with a boolean array: instead, the split method provides the analogous indexing tricks. To use split, we first establish the boolean logic as an expression, and then use split to parse that expression.
For this example, we can pass the boolean logic it as a Variable, and then split based on the variable value:
End of explanation
z_subset = z[x >= 0.5]
data_subset = data.split("x", [0.5], units="Hz", verbose=False)[1]
print("ndim:", data_subset.ndim)
print(np.all(data_subset.z[:] == z_subset))
Explanation: Here's a few more good examples of using split
Slicing (Keep Dimensionality) --> data.split
End of explanation
z_subset = z[2] # z_subset = z[x==x[2]] is equivalent
data_subset = data.chop("y", at={"x": [data.x[2], "Hz"]}, verbose=False)[0]
print("ndim:", data_subset.ndim)
print(np.all(data_subset.z[:] == z_subset))
Explanation: Slicing (Reduce Dimensionality) --> data.chop
End of explanation
# option 1: iterate through collection
chop = data.chop("y")
for di in chop.values():
print(di.constants)
print("\r")
# option 2: iterate through points, use "at" kwarg
for xi in data.x.points:
di = data.chop("y", at={"x": [xi, data.x.units]}, verbose=False)[0]
print(di.constants)
Explanation: Use chop to loop through reduced dimensions of the data.
End of explanation
z_na = z[..., None]
new_data = data.copy()
new_data.create_variable(name="Temperature", values=np.ones((1, 1)))
# note the variable shape--variable is broadcast to all elements of the data
# optional: declare Temperature a constant via `create constant`
# new_data.create_constant("Temperature")
new_data.transform("x", "y", "Temperature")
print("z_na.shape: ", z_na.shape, f" ({z_na.ndim}D)")
print("new_data.shape: ", new_data.shape, f" ({new_data.ndim}D)")
Explanation: For very large datasets, the second option is useful because you never deal with the whole collection. Thus you can loop through individual chop elements and close them after each iteration, saving memory.
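A sketch of that pattern with explicit cleanup (assuming the per-slice object is released with close(); the processing step is a placeholder):
for xi in data.x.points:
    di = data.chop("y", at={"x": [xi, data.x.units]}, verbose=False)[0]
    # ... process di here ...
    di.close()  # free the temporary object before the next iteration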
np.newaxis (or arr[:, None]) --> create_variable
For a Data object to understand another dimension, create a new variable for the dataset (and transform to the new variable). Since np.newaxis makes an orthogonal array dimension, the new variable will be a constant over the data currently spanned:
End of explanation
new_data2 = data.copy(verbose=False)
new_data2.create_variable(name="Temperature", values=np.ones((1, 1))*2)
# new_data2.create_constant("Temperature")
new_data2.transform("x", "y", "Temperature")
data_with_temps = wt.data.join([new_data, new_data2])
data_with_temps.print_tree()
Explanation: We have added a new variable, but new_data does not increase dimensionality. Dimensionality corresponds to the array shapes, not the number of variables (the dimensionality would still be 2 even if Temperature changed for each x and y coordinate).
Even though the dimensionality has not changed, new_data now understands another axis is in play. The above procedure allows us to expand the dimensionality via join.
np.concatenate/tile/stack/block, etc --> wt.data.join
Case I: increase dimensionality (stack, block)
If we have two datasets with a trivial dimension of different values, we can combine them to achieve a higher dimensionality data object:
End of explanation
chopped = data.chop("y", verbose=False) # data objects in `chopped` have the same axes and points, but differing "y" values
# pre-condition data as higher dimensionality
for di in chopped.values():
di.transform("x", "y")
stacked = wt.data.join(chopped.values(), name="stacked", verbose=False)
stacked.print_tree()
print(np.all(stacked.z[:] == data.z[:]))
Explanation: Note that this strategy can be used to undo the chop operation:
End of explanation
splitted = data.split("x", [0.5], units="Hz") # data objects with the same axes, but different points
concatenated = wt.data.join(splitted.values(), name="concatenated", verbose=False)
print(data.shape, concatenated.shape) # note: different shapes can arise!
print(np.all(data.z[:].T == concatenated.z[:]))
Explanation: Case II: same dimensionality/mixed dimensionality (concatenate)
This problem is equivalent to inverting split. Note that this rectification will only recompose the array shape to within a transpose.
End of explanation
z **= 2
data.z **= 2 # works for +, -, /, *, **
print(np.all(data.z[:] == z))
data.z **= 0.5
z **= 0.5
Explanation: Channel Array Math
Binary (channel + constant) operations
End of explanation
# within the same data object:
data.create_channel(name="zed", values=-data.z[:])
data.zed += data.z
print(data.zed[:])
data.remove_channel("zed")
# between two data objects
data2 = data.copy(verbose=False)
data2.z += data.z
print(np.all(data2.z[:] == 2 * data.z[:]))
data2.close()
Explanation: Binary (channel + channel) Operations
End of explanation
change_x = data.copy(verbose=False)
x = change_x["x"] # the x reference is necessary to use setitem (*=, +=, etc.) syntax on a variable
x **= 2
print(np.all(data["x"][:]**2 == change_x["x"][:]))
Explanation: Variable Math
Variables require tricky syntax to change (the above channel math will not work). But the following will work:
End of explanation
units_data = data.copy()
units_data.convert("wn") # all axes with frequency/wavelength units will be converted to wavenumbers
units_data.print_tree()
units_data.x.convert("Hz") # apply conversion only to x axis
units_data.print_tree()
Explanation: However, do you really want to change your variables, or just use different units? It's often the latter. In that case, apply convert to the axes:
End of explanation
print(*data.x[:])
# define replacement variable
data.create_variable(name="_x", values=np.linspace(0, 2, data.x.size).reshape(data["x"].shape), units=data.x.units)
# remove target variable
data.transform("y")
data.remove_variable("x")
# replace target variable
data.rename_variables(_x="x")
data.transform("x", "y")
data.print_tree()
print(*data.x[:])
Explanation: Maybe you do need to do math that is not a unit conversion (for instance, shift delay values). If needed, you can overwrite the old variable by removing it and renaming the new variable:
End of explanation
data = data.copy(verbose=False)
data.transform("2*x", "3*y") # do not use spaces when defining axes
print(*data.axes)
data.transform("2*x-y", "2*y") # note that axis[0] is now 2D
print(*data.axes)
data.transform("x-2", "y") # constant 2 is interpreted in units of `data.x.units`
print(*data.axes)
Explanation: Axes Math - transform
Axes are just expressions of variables, so the scope of Axis math is the linear combinations of variables (plus offsets). Keep in mind that calling linear combinations of variables will force the Data object to rectify the units of all variables involved.
End of explanation |
14,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Energy system optimisation with oemof - how to collect and store results
Import necessary modules
Step1: Specify solver
Step2: Create an energy system and optimize the dispatch at least costs.
Step3: Create and add components to energysystem
Step4: Optimization
Step5: Write results into energysystem.results object for later
Step6: Save results - Dump the energysystem (to ~/home/user/.oemof by default)
Specify path and filename if you do not want to overwrite | Python Code:
import os
import pandas as pd
from oemof.solph import (Sink, Source, Transformer, Bus, Flow, Model,
EnergySystem, processing, views)
import pickle
Explanation: Energy system optimisation with oemof - how to collect and store results
Import necessary modules
End of explanation
solver = 'cbc'
Explanation: Specify solver
End of explanation
# initialize and provide data
datetimeindex = pd.date_range('1/1/2016', periods=24*10, freq='H')
energysystem = EnergySystem(timeindex=datetimeindex)
filename = 'input_data.csv'
filename = os.path.join(os.getcwd(), filename)
data = pd.read_csv(filename, sep=",")
Explanation: Create an energy system and optimize the dispatch at least costs.
End of explanation
# resource buses
bcoal = Bus(label='coal', balanced=False)
bgas = Bus(label='gas', balanced=False)
boil = Bus(label='oil', balanced=False)
blig = Bus(label='lignite', balanced=False)
# electricity and heat
bel = Bus(label='bel')
bth = Bus(label='bth')
energysystem.add(bcoal, bgas, boil, blig, bel, bth)
# an excess and a shortage variable can help to avoid infeasible problems
energysystem.add(Sink(label='excess_el', inputs={bel: Flow()}))
# shortage_el = Source(label='shortage_el',
# outputs={bel: Flow(variable_costs=200)})
# sources
energysystem.add(Source(label='wind', outputs={bel: Flow(
fix=data['wind'], nominal_value=66.3)}))
energysystem.add(Source(label='pv', outputs={bel: Flow(
fix=data['pv'], nominal_value=65.3)}))
# demands (electricity/heat)
energysystem.add(Sink(label='demand_el', inputs={bel: Flow(
nominal_value=85, fix=data['demand_el'])}))
energysystem.add(Sink(label='demand_th',
inputs={bth: Flow(nominal_value=40,
                                        fix=data['demand_th'])}))
# power plants
energysystem.add(Transformer(
label='pp_coal',
inputs={bcoal: Flow()},
outputs={bel: Flow(nominal_value=20.2, variable_costs=25)},
conversion_factors={bel: 0.39}))
energysystem.add(Transformer(
label='pp_lig',
inputs={blig: Flow()},
outputs={bel: Flow(nominal_value=11.8, variable_costs=19)},
conversion_factors={bel: 0.41}))
energysystem.add(Transformer(
label='pp_gas',
inputs={bgas: Flow()},
outputs={bel: Flow(nominal_value=41, variable_costs=40)},
conversion_factors={bel: 0.50}))
energysystem.add(Transformer(
label='pp_oil',
inputs={boil: Flow()},
outputs={bel: Flow(nominal_value=5, variable_costs=50)},
conversion_factors={bel: 0.28}))
# combined heat and power plant (chp)
energysystem.add(Transformer(
label='pp_chp',
inputs={bgas: Flow()},
outputs={bel: Flow(nominal_value=30, variable_costs=42),
bth: Flow(nominal_value=40)},
conversion_factors={bel: 0.3, bth: 0.4}))
# heat pump with a coefficient of performance (COP) of 3
b_heat_source = Bus(label='b_heat_source')
energysystem.add(b_heat_source)
energysystem.add(Source(label='heat_source', outputs={b_heat_source: Flow()}))
cop = 3
energysystem.add(Transformer(
label='heat_pump',
inputs={bel: Flow(),
b_heat_source: Flow()},
outputs={bth: Flow(nominal_value=10)},
conversion_factors={bel: 1/3, b_heat_source: (cop-1)/cop}))
Explanation: Create and add components to energysystem
End of explanation
# create optimization model based on energy_system
optimization_model = Model(energysystem=energysystem)
# solve problem
optimization_model.solve(solver=solver,
solve_kwargs={'tee': True, 'keepfiles': False})
Explanation: Optimization
End of explanation
energysystem.results['main'] = processing.results(optimization_model)
energysystem.results['meta'] = processing.meta_results(optimization_model)
string_results = views.convert_keys_to_strings(energysystem.results['main'])
Explanation: Write results into energysystem.results object for later
End of explanation
energysystem.dump(dpath=None, filename=None)
Explanation: Save results - Dump the energysystem (to ~/home/user/.oemof by default)
Specify path and filename if you do not want to overwrite
End of explanation |
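If you want a specific location instead, dump and restore accept a path and a filename; a hedged sketch (the path and filename below are only examples):
energysystem.dump(dpath="./dumps", filename="es_dispatch.oemof")
restored = EnergySystem()
restored.restore(dpath="./dumps", filename="es_dispatch.oemof")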
14,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building your Deep Neural Network
Step2: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
<table style="width | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\
m & n & o \
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\
d & e & f \
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \
t \
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
    Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = linear_forward(A_prev, W, b)
        A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
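For reference, a minimal sketch of what the provided activation helpers might look like (the real implementations live in dnn_utils_v2; the _sketch names mark these as illustrations only):
def sigmoid_sketch(Z):
    A = 1 / (1 + np.exp(-Z))
    cache = Z                      # Z is stored for the backward pass
    return A, cache

def relu_sketch(Z):
    A = np.maximum(0, Z)
    cache = Z                      # Z is stored for the backward pass
    return A, cache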
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation="relu")
        caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation="sigmoid")
    caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
    cost = -(1./m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.17007265 0.2524272 ]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
End of explanation
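A quick hand check of formula (7) on made-up values (illustrative only):
Y_tiny = np.array([[1, 0]])
AL_tiny = np.array([[0.9, 0.2]])
print(-(np.log(0.9) + np.log(1 - 0.2)) / 2)   # computed by hand, ≈ 0.1643
print(compute_cost(AL_tiny, Y_tiny))          # should agree with the line above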
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
    dW = (1./m) * np.dot(dZ, A_prev.T)
    db = (1./m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
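# A sketch of equations (8)-(10) above, under an illustrative name and assuming the
# (A_prev, W, b) cache layout used in this assignment:
def linear_backward_sketch(dZ, cache):
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = (1. / m) * np.dot(dZ, A_prev.T)
    db = (1. / m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db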
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = None
dA_prev, dW, db = None
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = None
dA_prev, dW, db = None
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
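# A sketch of the merged LINEAR->ACTIVATION backward step (illustrative name; it
# relies on the provided sigmoid_backward/relu_backward helpers and on a completed
# linear_backward implementation):
def linear_activation_backward_sketch(dA, cache, activation):
    linear_cache, activation_cache = cache
    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    return linear_backward(dZ, linear_cache)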
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = None
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = None
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = None
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = None
dA_prev_temp, dW_temp, db_temp = None
grads["dA" + str(l + 1)] = None
grads["dW" + str(l + 1)] = None
grads["db" + str(l + 1)] = None
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
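# A sketch of the full backward pass (illustrative name), following the
# grads["dA" + str(l + 1)] indexing convention used in the L_model_backward stub:
def L_model_backward_sketch(AL, Y, caches):
    grads = {}
    L = len(caches)
    Y = Y.reshape(AL.shape)
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
    current_cache = caches[L - 1]
    grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = \
        linear_activation_backward(dAL, current_cache, activation="sigmoid")
    for l in reversed(range(L - 1)):
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(
            grads["dA" + str(l + 2)], current_cache, activation="relu")
        grads["dA" + str(l + 1)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp
    return grads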
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
learning_rate -- step size used in the gradient descent update
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = None
parameters["b" + str(l+1)] = None
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation |
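# A sketch of the update rules in equations (16)-(17) (illustrative name):
def update_parameters_sketch(parameters, grads, learning_rate):
    L = len(parameters) // 2
    for l in range(L):
        parameters["W" + str(l + 1)] -= learning_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] -= learning_rate * grads["db" + str(l + 1)]
    return parameters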
14,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parameter Estimation of RIG Roll Experiments
Setup and descriptions
Without ACM model
Turn on wind tunnel
Only 1DoF for RIG roll movement
Use small-amplitude aileron command of CMP as inputs (in degrees)
$$U = \delta_{a,cmp}(t)$$
Consider RIG roll angle and its derivative as States (in radians)
$$X = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Observe RIG roll angle and its derivative as Outputs (in degrees)
$$Z = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Use the output error method based on maximum likelihood (ML) to estimate
$$ \theta = \begin{pmatrix} C_{l,\delta_a,cmp} \\ C_{lp,cmp} \end{pmatrix} $$
Startup computation engines
Step1: Data preparation
Load raw data
Step2: Check time sequence and inputs/outputs
Click 'Check data' button to show the raw data.
Click on curves to select time point and push into queue; click 'T/s' text to pop up last point in the queue; and click 'Output' text to print time sequence table.
Step3: Input $\delta_T$ and focused time ranges
Step4: Resample and filter data in sections
For each section,
* Select time range and shift it to start from zero;
* Resample Time, Inputs, Outputs in unique $\delta_T$;
* Smooth Input/Observe data if flag bit0 is set;
* Take derivatives of observe data if flag bit1 is set.
Step5: Define dynamic model to be estimated
$$\left\{\begin{aligned}
M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\
M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{la,cmp}\,\delta_{a,cmp} \\
M_{x,f} &= -F_c \, \operatorname{sign}(\dot{\phi}_{rig}) \\
M_{x,cg} &= -m_T g l_{zT} \sin \left( \phi - \phi_0 \right)
\end{aligned}\right.$$
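Because the static balance above sums to zero, it can be solved directly for the aileron command that holds a given roll angle. A minimal numeric sketch is shown below; the effectiveness coefficient C_la_cmp, friction F_c and offset phi0 are placeholder assumptions for illustration only (the physical constants match the values listed later in this notebook).
python
import numpy as np
rho, V, S_c, b_c = 1.225, 30.0, 0.1254, 0.7
m_T, g, l_zT = 9.585, 9.81, 0.0416
C_la_cmp, F_c, phi0 = 0.05, 0.06, 0.0   # assumed values, not estimation results
def delta_a_cmp(phi, phi_dot):
    qbarSb = 0.5 * rho * V**2 * S_c * b_c
    # the aero moment must cancel the friction moment and the cg restoring moment
    return (F_c * np.sign(phi_dot) + m_T * g * l_zT * np.sin(phi - phi0)) / (qbarSb * C_la_cmp)
print(delta_a_cmp(np.radians(10.0), 0.5))  # aileron command holding phi = 10 deg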
Step6: Initial guess
Input default values and ranges for parameters
Select sections for trainning
Adjust parameters based on simulation results
Decide start values of parameters for optimization
Step7: Optimize using ML
Step8: Show and test results | Python Code:
%run matt_startup
%run -i matt_utils
button_qtconsole()
#import other needed modules in all used engines
#with dview.sync_imports():
# import os
Explanation: Parameter Estimation of RIG Roll Experiments
Setup and descriptions
Without ACM model
Turn on wind tunnel
Only 1DoF for RIG roll movement
Use small-amplitude aileron command of CMP as inputs (in degrees)
$$U = \delta_{a,cmp}(t)$$
Consider RIG roll angle and its derivative as States (in radians)
$$X = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Observe RIG roll angle and its derivative as Outputs (in degrees)
$$Z = \begin{pmatrix} \phi_{rig} \\ \dot{\phi}_{rig} \end{pmatrix}$$
Use the output error method based on maximum likelihood (ML) to estimate
$$ \theta = \begin{pmatrix} C_{l,\delta_a,cmp} \\ C_{lp,cmp} \end{pmatrix} $$
Startup computation engines
End of explanation
filename = 'FIWT_Exp036_20150605144438.dat.npz'
def loadData():
# Read and parse raw data
global exp_data
exp_data = np.load(filename)
# Select colums
global T_cmp, da_cmp
T_cmp = exp_data['data33'][:,0]
da_cmp = exp_data['data33'][:,3]
global T_rig, phi_rig
T_rig = exp_data['data44'][:,0]
phi_rig = exp_data['data44'][:,2]
loadData()
text_loadData()
Explanation: Data preparation
Load raw data
End of explanation
def checkInputOutputData():
#check inputs/outputs
fig, ax = plt.subplots(2,1,True)
ax[1].plot(T_cmp, da_cmp, 'r', picker=2)  # da_cmp is the column loaded in loadData()
ax[0].plot(T_rig,phi_rig, 'b', picker=2)
ax[1].set_ylabel('$\delta \/ / \/ ^o$')
ax[0].set_ylabel('$\phi \/ / \/ ^o$')
ax[1].set_xlabel('$T \/ / \/ s$', picker=True)
ax[0].set_title('Output', picker=True)
fig.canvas.mpl_connect('pick_event', onPickTime)
fig.show()
display(fig)
button_CheckData()
Explanation: Check time sequence and inputs/outputs
Click 'Check data' button to show the raw data.
Click on curves to select time point and push into queue; click 'T/s' text to pop up last point in the queue; and click 'Output' text to print time sequence table.
End of explanation
# Pick up focused time ranges
time_marks = [
[17.6940178696,117,"ramp u1"],
[118.7,230.395312673,"ramp d1"],
[258.807481992,357.486188688,"ramp u2"],
[359.463122988,459.817499014,"ramp d2"],
[461.067939262,558.784538108,"ramp d3"],
[555.553175853,658.648739191,"ramp u4"],
]
# Decide DT,U,Z and their processing method
DT=0.5
process_set = {
'Z':[(T_cmp, da_cmp,1),],
'U':[(T_rig, phi_rig,1),],
'cutoff_freq': 1 #Hz
}
U_names = ['$\phi_{a,rig} \, / \, ^o$']
Y_names = Z_names = ['$\delta_{a,cmp} \, / \, ^o$']
display_data_prepare()
Explanation: Input $\delta_T$ and focused time ranges
End of explanation
resample(True);
Explanation: Resample and filter data in sections
For each section,
* Select time range and shift it to start from zero;
* Resample Time, Inputs, Outputs in unique $\delta_T$;
* Smooth Input/Observe data if flag bit0 is set;
* Take derivatives of observe data if flag bit1 is set.
End of explanation
%%px --local
#update common const parameters in all engines
angles = range(-170,171,10)
angles_num = len(angles)
#problem size
Nx = 0
Nu = 1
Ny = 1
Npar = 2*angles_num
#reference
S_c = 0.1254 #S_c(m2)
b_c = 0.7 #b_c(m)
g = 9.81 #g(m/s2)
#static measurement
m_T = 9.585 #m_T(kg)
l_z_T = 0.0416 #l_z_T(m)
V = 30 #V(m/s)
#previous estimations
F_c = 0.06 #F_c(N*m)
#for short
qbarSb = 0.5*1.225*V*V*S_c*b_c
_m_T_l_z_T_g = -(m_T*l_z_T)*g
angles_cmpx = [-41, -35, -30, -25, -20, -15, -10, -5, 5, 10, 15, 20, 25, 30, 35, 41]
Clda_cmpx = np.array([[ 0.0565249, 0.05387389, 0.04652094, 0.03865756, 0.03113176, 0.02383392,
0.01576805, 0.00682421, -0.00514735, -0.01570602, -0.02364258, -0.02733501,
-0.0334053, -0.03834931, -0.04546589, -0.05275809],
[ 0.04793643, 0.04133362, 0.03510728, 0.02872704, 0.02349928, 0.01987537,
0.01354014, 0.00666756, -0.00498937, -0.01074795, -0.01714402, -0.02095374,
-0.02588346, -0.03155207, -0.03824692, -0.04752196],
[ 0.04250404, 0.03700739, 0.03057273, 0.02586595, 0.01989991, 0.01626642,
0.01237347, 0.00642308, -0.00551828, -0.01030141, -0.01654802, -0.02143956,
-0.0268223, -0.03110662, -0.03565326, -0.04001201,],
[ 0.03448653, 0.03160534, 0.02700394, 0.02388641, 0.01988023, 0.01532191,
0.01052986, 0.00572438, -0.00556645, -0.0101715, -0.01388348, -0.01619465,
-0.02044906, -0.02510722, -0.03134562, -0.0355593 ]])
Clda_cmp = np.sum(Clda_cmpx, axis=0)
Lda_cmp = qbarSb*Clda_cmp
Da_cmp = scipy.interpolate.interp1d(Lda_cmp, angles_cmpx)
hdr = int(1/DT)
def obs(Z,T,U,params):
s = T.size
unk_moment = scipy.interpolate.interp1d(angles, params[0:angles_num])
col_fric = scipy.interpolate.interp1d(angles, params[angles_num:angles_num*2])
phi = U[:s,0]
phi_diff = np.copysign(-1,phi[hdr:]-phi[:-hdr])
phi_diff = np.concatenate((np.ones(hdr-1)*phi_diff[0], phi_diff, np.ones(hdr-1)*phi_diff[-1]))
Da = Da_cmp(-unk_moment(phi)-col_fric(phi_diff))
return Da.reshape((-1,1))
display(HTML('<b>Constant Parameters</b>'))
table = ListTable()
table.append(['Name','Value','unit'])
table.append(['$S_c$',S_c,'$m^2$'])
table.append(['$b_c$',b_c,'$m$'])
table.append(['$g$',g,'$m/s^2$'])
table.append(['$m_T$',m_T,'$kg$'])
table.append(['$l_{zT}$',l_z_T,'$m$'])
table.append(['$V$',V,'$m/s$'])
display(table)
Explanation: Define dynamic model to be estimated
$$\left\{\begin{aligned}
M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\
M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{la,cmp}\,\delta_{a,cmp} \\
M_{x,f} &= -F_c \, \operatorname{sign}(\dot{\phi}_{rig}) \\
M_{x,cg} &= -m_T g l_{zT} \sin \left( \phi - \phi_0 \right)
\end{aligned}\right.$$
End of explanation
#initial guess
param0 = [_m_T_l_z_T_g*math.sin(a/57.3) for a in angles]+[F_c]*angles_num
param_name = ['Mu_{}'.format(angles[i]) for i in range(angles_num)] \
+ ['Fc_{}'.format(angles[i]) for i in range(angles_num)]
param_unit = ['Nm']*(2*angles_num)
NparID = Npar
opt_idx = range(Npar)
opt_param0 = [param0[i] for i in opt_idx]
par_del = [0.01]*(2*angles_num)
bounds = [(-4,4)]*(angles_num) + [(0,0.4)]*(angles_num)
display_default_params()
#select sections for training
section_idx = range(len(time_marks))
display_data_for_train()
#push parameters to engines
push_opt_param()
# select 4 section from training data
#idx = random.sample(section_idx, 4)
idx = section_idx[:]
# interact_guess();
update_guess();
Explanation: Initial guess
Input default values and ranges for parameters
Select sections for trainning
Adjust parameters based on simulation results
Decide start values of parameters for optimization
End of explanation
display_preopt_params()
if False:
InfoMat = None
method = 'trust-ncg'
def hessian(opt_params, index):
global InfoMat
return InfoMat
dview['enable_infomat']=True
options={'gtol':1}
opt_bounds = None
else:
method = 'L-BFGS-B'
hessian = None
dview['enable_infomat']=False
options={'ftol':1e-6,'maxfun':400}
opt_bounds = bounds
cnt = 0
tmp_rslt = None
T0 = time.time()
print('#cnt, Time, |R|')
%time res = sp.optimize.minimize(fun=costfunc, x0=opt_param0, \
args=(opt_idx,), method=method, jac=True, hess=hessian, \
bounds=opt_bounds, options=options)
Explanation: Optimize using ML
End of explanation
display_opt_params()
# show result
idx = range(len(time_marks))
display_data_for_test();
update_guess();
res_params = res['x']
params = param0[:]
for i,j in enumerate(opt_idx):
params[j] = res_params[i]
k1 = np.array(params[0:angles_num])
k2 = np.array(params[angles_num:angles_num*2])
print('angeles = ')
print(angles)
print('L_unk = ')
print(k1)
print('F_c = ')
print(k2)
%matplotlib inline
plt.figure(figsize=(12,12),dpi=300)
plt.subplot(211)
plt.plot(angles, k1, 'r')
plt.ylabel('$L_{unk} \, / \, Nm$')
plt.subplot(212)
plt.plot(angles, k2, 'g')
plt.ylabel('$F_c \, / \, Nm$')
plt.xlabel('$\phi \, / \, ^o/s$')
plt.show()
toggle_inputs()
button_qtconsole()
(-0.05-0.05)/(80/57.3)
Clda_cmp/4
Explanation: Show and test results
End of explanation |
14,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supplementary material for "Mesoscale to submesoscale wavenumber spectra in Drake Passage" (in prep. for JPO)
C. B. Rocha, T. K. Chereskin, S. T. Gille, and D. Menemenlis
This notebook showcases the use of decomposition of one-dimensional kinetic energy (KE) spectra into rotational and divergent components using the decomposition proposed by Bühler et al. JFM 2014. We implemented this decomposition in Python. The function "spec_helm_decomp.py" takes the one-dimensional along-track spectra of across-track and along-track velocity components and returns the corresponding rotational and divergent spectra. This function is part of pyspec, a legit Python package for spectral analysis that was developed as part of this project (openly available on github). If you do not want to install pyspec, you can always download the specific module (helmholtz.py) and import it into Python, or copy the function spec_helm_decomp into your code and call it directly.
For details about the calculation see appendix C of Rocha et al. (in prep.) and the original paper by Bühler et al. JFM 2014.
Step2: Some plotting stuff
Not needed, but make the plots nicer.
Step3: Testing
First we test the decomposition with a synthetic spectrum to check its correcteness and assess its accuracy.
Step4: We create two sets of one-dimensional spectra that follows a $k^{-3}$ power law
Step5: One is solely horizontally rotational (nondivergent)
Step6: The other is purely horizontally divergent (irrotarional)
Step7: Notice that the figure above shows that there is a residual in the decomposition. This is because we do not know $\hat{C}^u$ and $\hat{C}^v$ down to $k = \infty$. Thus we stop short of $\infty$ in the numerical integration (Eqs. C2 and C3 of Rocha et al.). Among other things, the structure and magnitude of the residual depends on the redness (residual is smaller for redder spectra). In practice, for spectra that approximately follow a $k^{-3}$ power law, we find that the residual is significant only at very high wanumbers. For instance, in the example plotted above, the e error is only larger than $10\%$ at scales smaller than $1.6$ km.
An example with the Drake Passage spectrum
We now apply the Bühler et al. 2014 decomposition to a real-ocean spectrum. We use the KE spectrum of upper most layer considered in Rocha et al. (in prep.) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# if you don't have pyspec installed comment this out
# and follow the instructions below
from pyspec import helmholtz as helm
# copy helmholts.py into your working directory and import it
# (just uncomment the line below)
# import helmholtz as helm
Explanation: Supplementary material for "Mesoscale to submesoscale wavenumber spectra in Drake Passage" (in prep. for JPO)
C. B. Rocha, T. K. Chereskin, S. T. Gille, and D. Menemenlis
This notebook showcases the use of decomposition of one-dimensional kinetic energy (KE) spectra into rotational and divergent components using the decomposition proposed by Bühler et al. JFM 2014. We implemented this decomposition in Python. The function "spec_helm_decomp.py" takes the one-dimensional along-track spectra of across-track and along-track velocity components and returns the corresponding rotational and divergent spectra. This function is part of pyspec, a legit Python package for spectral analysis that was developed as part of this project (openly available on github). If you do not want to install pyspec, you can always download the specific module (helmholtz.py) and import it into Python, or copy the function spec_helm_decomp into your code and call it directly.
For details about the calculation see appendix C of Rocha et al. (in prep.) and the original paper by Bühler et al. JFM 2014.
End of explanation
# set figure params: bigger fonts, labels, etc.
plt.rcParams.update({'font.size': 25, 'legend.handlelength' : 1.5
, 'legend.markerscale': 1.})
plt.rc('xtick', labelsize=22)
plt.rc('ytick', labelsize=22)
# some colors (prettier than default boring colors)
color2 = '#6495ed'
color1 = '#ff6347'
color5 = '#8470ff'
color3 = '#3cb371'
color4 = '#ffd700'
color6 = '#ba55d3'
lw=3 # linewith
aph=.7 # transparency
# avoid typing the exact same lines of code many times
def plt_labels(ax):
write KE spectrum labels
ax.set_xlabel("Along-track wavenumber [cpkm]")
ax.set_ylabel(r"KE spectral density [m$^2$ s$^{-2}$/cpkm]")
Explanation: Some plotting stuff
Not needed, but make the plots nicer.
End of explanation
k = np.linspace(.5*1e-2,1.,250)
dk = k[1]-k[0]
Explanation: Testing
First we test the decomposition with a synthetic spectrum to check its correcteness and assess its accuracy.
End of explanation
E3 = 1./k**3
KEaux = 2*E3.sum()*dk
E3 = E3/KEaux
Explanation: We create two sets of one-dimensional spectra that follows a $k^{-3}$ power law
End of explanation
Cu_rot, Cv_rot = 3*E3, E3 # purely rotational
Explanation: One is solely horizontally rotational (nondivergent)
End of explanation
Cu_div, Cv_div = E3, 3*E3 # purely divergent
Cpsi_rot, Cphi_rot = helm.spec_helm_decomp(k,Cu_rot, Cv_rot)
Cpsi_div, Cphi_div = helm.spec_helm_decomp(k,Cu_div, Cv_div)
fig = plt.figure(facecolor='w', figsize=(16.,8.))
ax = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax.loglog(k,Cu_rot/2,color=color1,linewidth=lw,\
label=r'$\hat{C}^u$: across-track')
ax.loglog(k,Cv_rot/2,color=color2,linewidth=lw,\
label=r'$\hat{C}^v$: along-track')
ax.loglog(k,Cpsi_rot/2.,color=color3,linewidth=lw,\
label=r'$\hat{C}^\psi$: rotational')
ax.loglog(k,Cphi_rot/2.,'g',color=color4,linewidth=lw,\
label=r'$\hat{C}^\phi$: divergent')
ax2.loglog(k,Cu_div/2,color=color1,linewidth=lw,\
label=r'$\hat{C}^u$: across-track')
ax2.loglog(k,Cv_div/2,color=color2,linewidth=lw,\
label=r'$\hat{C}^v$: along-track')
ax2.loglog(k,Cpsi_div/2.,color=color3,linewidth=lw,\
label=r'$\hat{C}^\psi$: rotational')
ax2.loglog(k,Cphi_div/2.,'g',color=color4,linewidth=lw,\
label=r'$\hat{C}^\phi$: divergent')
lg = ax2.legend(loc=3)
plt_labels(ax2)
plt_labels(ax)
Explanation: The other is purely horizontally divergent (irrotarional)
End of explanation
data_path = './outputs/'
slab1=np.load(data_path+'adcp_spec_slab1.npz')
Cpsi_slab1, Cphi_slab1 = helm.spec_helm_decomp(slab1['k'],slab1['Eu'], slab1['Ev'])
fig = plt.figure(facecolor='w', figsize=(7.,8.))
ax1 = fig.add_subplot(111)
ax1.loglog(slab1['k'],slab1['Eu']/2,color=color1,linewidth=lw,\
label=r'$\hat{C}^u$: across-track')
ax1.loglog(slab1['k'],slab1['Ev']/2.,color=color2,linewidth=lw,\
label=r'$\hat{C}^v$: along-track')
ax1.loglog(slab1['k'],Cpsi_slab1/2.,color=color3,linewidth=lw,\
label=r'$\hat{C}^\psi$: rotational')
ax1.loglog(slab1['k'],Cphi_slab1/2.,color=color4,linewidth=lw,\
label=r'$\hat{C}^\phi$: divergent')
lg = ax1.legend(loc=3)
plt_labels(ax1)
Explanation: Notice that the figure above shows that there is a residual in the decomposition. This is because we do not know $\hat{C}^u$ and $\hat{C}^v$ down to $k = \infty$. Thus we stop short of $\infty$ in the numerical integration (Eqs. C2 and C3 of Rocha et al.). Among other things, the structure and magnitude of the residual depends on the redness (residual is smaller for redder spectra). In practice, for spectra that approximately follow a $k^{-3}$ power law, we find that the residual is significant only at very high wanumbers. For instance, in the example plotted above, the e error is only larger than $10\%$ at scales smaller than $1.6$ km.
An example with the Drake Passage spectrum
We now apply the Bühler et al. 2014 decomposition to a real-ocean spectrum. We use the KE spectrum of upper most layer considered in Rocha et al. (in prep.)
End of explanation |
14,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculating Annotation Coverage
This section shows how to calculate annotation coverage as described here
Step1: 2. Read associations
2a. You can read the associations one species at a time...
Step2: 2b. Or you can read 'gene2go' once and load all species...
Step3: 3. Import protein-coding information for human and fly
Step4: 4. Calculate Gene Ontology coverage
Store GO coverage information for human and fly in the list, cov_data.
Step5: 5 Report Gene Ontology coverage for human and fly
Print the human and fly GO coverage information that is stored in the list, cov_data. | Python Code:
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
Explanation: Calculating Annotation Coverage
This section shows how to calculate annotation coverage as described here:
Annotation coverage of Gene Ontology (GO) terms to individual
gene products is high for human or model organisms:
* 87% of ~20k human protein-coding genes have GO annotations
* 76% of ~14k fly protein-coding genes have GO annotations
(Apr 27, 2016)
1. Download associations
NCBI's gene2go file contains annotations of GO terms to Entrez GeneIDs for over 35 different species. We are interested in human and fly, which have the taxids 9606 and 7227 respectively.
End of explanation
from goatools.associations import read_ncbi_gene2go
geneid2gos_human = read_ncbi_gene2go(gene2go, taxids=[9606])
geneid2gos_fly = read_ncbi_gene2go(gene2go, taxids=[7227])
Explanation: 2. Read associations
2a. You can read the associations one species at a time...
End of explanation
from collections import defaultdict, namedtuple
taxid2asscs = defaultdict(lambda: defaultdict(lambda: defaultdict(set)))
geneid2gos_all = read_ncbi_gene2go(
gene2go,
taxids=[9606, 7227],
taxid2asscs=taxid2asscs)
Explanation: 2b. Or you can read 'gene2go' once and load all species...
End of explanation
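# The nested defaultdict now holds one association map per species; the keys below
# are the same ones used later in this notebook:
human_gene2gos = taxid2asscs[9606]['GeneID2GOs']
fly_gene2gos = taxid2asscs[7227]['GeneID2GOs']
print(len(human_gene2gos), "human GeneIDs and", len(fly_gene2gos), "fly GeneIDs have GO annotations")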
from goatools.test_data.genes_NCBI_9606_ProteinCoding import GeneID2nt as GeneID2nt_human
from goatools.test_data.genes_NCBI_7227_ProteinCoding import GeneID2nt as GeneID2nt_fly
lst = [
(9606, GeneID2nt_human),
(7227, GeneID2nt_fly)
]
Explanation: 3. Import protein-coding information for human and fly
End of explanation
cov_data = []
NtCov = namedtuple("NtCov", "taxid num_GOs num_covgenes coverage num_allgenes")
for taxid, pcGeneID2nt in lst:
# Get GeneID2GOs association for current species
geneid2gos = taxid2asscs[taxid]['GeneID2GOs']
# Restrict GeneID2GOs to only protein-coding genes for this report
pcgene_w_gos = set(geneid2gos.keys()).intersection(set(pcGeneID2nt.keys()))
num_pcgene_w_gos = len(pcgene_w_gos)
num_pc_genes = len(pcGeneID2nt)
# Number of GO terms annotated to protein-coding genes
gos_pcgenes = set()
for geneid in pcgene_w_gos:
gos_pcgenes |= geneid2gos[geneid]
# Print report data
cov_data.append(NtCov(
taxid = taxid,
num_GOs = len(gos_pcgenes),
num_covgenes = num_pcgene_w_gos,
coverage = 100.0*num_pcgene_w_gos/num_pc_genes,
num_allgenes = num_pc_genes))
Explanation: 4. Calculate Gene Ontology coverage
Store GO coverage information for human and fly in the list, cov_data.
End of explanation
from __future__ import print_function
print(" taxid GOs GeneIDs Coverage")
print("------ ------ ------- ----------------------")
fmtstr = "{TAXID:>6} {N:>6,} {M:>7,} {COV:2.0f}% GO coverage of {TOT:,} protein-coding genes"
for nt in cov_data:
print(fmtstr.format(
TAXID = nt.taxid,
N = nt.num_GOs,
M = nt.num_covgenes,
COV = nt.coverage,
TOT = nt.num_allgenes))
Explanation: 5 Report Gene Ontology coverage for human and fly
Print the human and fly GO coverage information that is stored in the list, cov_data.
End of explanation |
14,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
# Set the batch size higher if you can fit in in your GPU memory
batch_size = 50
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
set(labels)
from sklearn.preprocessing import OneHotEncoder
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
lb.classes_
[labels_vecs[0],
codes[0]]
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
for train_indices, holdout_indices in ss.split(codes, labels_vecs):
train_x, train_y = codes[train_indices], labels_vecs[train_indices]
holdout_x, holdout_y = codes[holdout_indices], labels_vecs[holdout_indices]
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.5)
for val_indices, test_indices in ss.split(holdout_x, holdout_y):
val_x, val_y = codes[val_indices], labels_vecs[val_indices]
test_x, test_y = codes[test_indices], labels_vecs[test_indices]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
import tflearn
tf.reset_default_graph()
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
net = tflearn.input_data(placeholder=inputs_)
net = tflearn.fully_connected(net, 1000, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 500, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 100, activation='ReLU')
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 50, activation='ReLU')
logits = tflearn.fully_connected(net, 5, activation='softmax')
net = tflearn.regression(logits, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net, checkpoint_path='checkpoints/ckpt-')
# TODO: Classifier layers and operations
# logits = # output layer logits
# cost = # cross entropy loss
# optimizer = # training optimizer
# Operations for validation/test accuracy
# predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
def get_batches(x, y, n_batches=10):
"""Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
#saver = tf.train.Saver()
# with model.session as sess:
model.fit(train_x, train_y, validation_set=(val_x, val_y), show_metric=True, batch_size=500, n_epoch=25)
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
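# A rough sketch of a plain-TensorFlow training loop built on get_batches(), as the
# exercise suggests. It assumes you defined the `cost` and `optimizer` ops yourself in
# the classifier cell above (they were left as TODOs there and are not created by the
# tflearn-based version), so treat the names here as assumptions, not fixed API.
def train_classifier_sketch(epochs=10):
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for e in range(epochs):
            for x, y in get_batches(train_x, train_y, n_batches=10):
                loss, _ = sess.run([cost, optimizer], feed_dict={inputs_: x, labels_: y})
            val_acc = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
            print("Epoch {}/{}: loss {:.4f}, val acc {:.4f}".format(e + 1, epochs, loss, val_acc))
        saver.save(sess, "checkpoints/flowers_sketch.ckpt")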
model.load('checkpoints/ckpt--150')
model.predict()
model.evaluate(test_x, test_y, 500)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints/ckpt-'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
#test_img_path = 'flower_photos/roses/102501987_3cdb8e5394_n.jpg'
#test_img_path = 'flower_photos/daisy/100080576_f52e8ee070_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
# saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
# feed = {inputs_: code}
# prediction = sess.run(predicted, feed_dict=feed).squeeze()
model.load('checkpoints/ckpt--150')
prediction = model.predict(code)
prediction_label = model.predict_label(code)
plt.imshow(test_img)
plt.barh(np.arange(5), prediction[0])
_ = plt.yticks(np.arange(5), lb.classes_)
prediction
prediction_label
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
14,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training integer quantization
Step2: Build an MNIST model
Build a simple model that classifies digits from the MNIST dataset.
Training the model takes only 5 epochs, so it does not take long, and it reaches roughly 98% accuracy.
Step3: Convert to a TensorFlow Lite model
Next, convert the trained model to the TensorFlow Lite format with the TFLiteConverter API, applying varying degrees of quantization.
Note that some versions of quantization leave part of the data in floating-point format. So the sections below increase the amount of quantization for each option until we get a model made entirely of int8 or uint8 data. (The code in each section is duplicated so you can see all the quantization steps for each option.)
First, here is the model converted with no quantization.
Step4: It is now a TensorFlow Lite model, but all the parameter data still uses 32-bit floating-point values.
Convert using dynamic range quantization
Now let's enable the default optimizations flag to quantize all the fixed parameters (such as weights).
Step5: The model is somewhat smaller now that the weights are quantized, but the other variable data is still in floating-point format.
Convert using float fallback quantization
To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a RepresentativeDataset. This is a generator function that supplies a set of input data large enough to represent typical values, so the converter can estimate the dynamic range of all the variable data. (Unlike the training or evaluation dataset, this dataset does not have to be unique.) To support multiple inputs, each representative data point is a list, and the list elements are fed to the model according to their index.
Step6: With all the weights and variable data quantized, the model is much smaller than the original TensorFlow Lite model.
However, to stay compatible with applications that traditionally use float model input and output tensors, the TensorFlow Lite Converter leaves the model's input and output tensors in float.
Step7: That is usually good for compatibility, but it will not work on devices that perform only integer-based operations, such as the Edge TPU.
Additionally, the above process may leave an operation in floating-point format if TensorFlow Lite does not include a quantized implementation for it. This strategy lets the conversion finish so you get a smaller, more efficient model, but again it is not compatible with integer-only hardware. (All the ops in this MNIST model have quantized implementations.)
So, to make sure we get an end-to-end integer-only model, we need a few more parameters.
Convert using integer-only quantization
To quantize the input and output tensors, and have the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters.
Step8: The internal quantization remains the same as above, but you can see that the input and output tensors are now in integer format.
Step9: You now have an integer quantized model that uses integer data for the model's input and output tensors, so it is compatible with integer-only hardware such as the Edge TPU.
Save the models as files
You need a .tflite file to deploy your model on other devices. So let's save the converted models to files and then load them when we run inference below.
Step10: Run the TensorFlow Lite models
Now let's run inference with the TensorFlow Lite Interpreter and compare the model accuracies.
First, we need a function that runs inference with a given model and images, and returns the predictions.
Step11: Test the models on one image
Next, compare the performance of the float model and the quantized model.
tflite_model_file is the original TensorFlow Lite model with floating-point data.
tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Now let's create another function that prints our predictions.
Step12: Now test the float model.
Step13: Then test the quantized model (which uses uint8 data):
Step14: Evaluate the models
Let's run both models on all the test images we loaded at the beginning of this tutorial.
Step15: Evaluate the float model.
Step16: Repeat the evaluation with the fully quantized model that uses uint8 data | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
Explanation: Post-training integer quantization
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/lite/performance/post_training_integer_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
Overview
Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This produces a smaller model and faster inference, which is valuable for low-power devices such as microcontrollers. The data format is also required by integer-only accelerators such as the Edge TPU.
In this tutorial, you train an MNIST model from scratch, convert it to a TensorFlow Lite file, and quantize it using post-training quantization. Finally, you check the accuracy of the converted model and compare it to the original float model.
You actually have several options for how much to quantize a model. Other strategies may leave some data in floating point, but this tutorial performs "full integer quantization", which converts all weights and activation outputs to 8-bit integer data.
For details about the various quantization strategies, see TensorFlow Lite model optimization.
Setup
To quantize both the input and output tensors, you need to use APIs added in TensorFlow r2.3.
End of explanation
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
Explanation: Build an MNIST model
Build a simple model that classifies digits from the MNIST dataset.
Training the model takes only 5 epochs, so it does not take long, and it reaches roughly 98% accuracy.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: Convert to a TensorFlow Lite model
Next, convert the trained model to the TensorFlow Lite format with the TFLiteConverter API, applying varying degrees of quantization.
Note that some versions of quantization leave part of the data in floating-point format. So the sections below increase the amount of quantization for each option until we get a model made entirely of int8 or uint8 data. (The code in each section is duplicated so you can see all the quantization steps for each option.)
First, here is the model converted with no quantization.
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
Explanation: It's now a TensorFlow Lite model, but it still uses 32-bit float values for all parameter data.
Convert using dynamic range quantization
Now let's enable the default optimizations flag to quantize all fixed parameters (such as weights).
End of explanation
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
Explanation: The model is now a bit smaller with quantized weights, but other variable data is still in float format.
Convert using float fallback quantization
To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a RepresentativeDataset. This is a generator function that provides a set of input data that's large enough to represent typical values. It allows the converter to estimate a dynamic range for all the variable data. (The dataset does not need to be unique compared to the training or evaluation dataset.) To support multiple inputs, each representative data point is a list and elements in the list are fed to the model according to their indices.
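For instance, if you had a hypothetical model with two inputs (this MNIST model has only one), the generator might look like the following sketch, where image_a_samples and image_b_samples are assumed example arrays:
python
def representative_data_gen_two_inputs():
  for a, b in zip(image_a_samples[:100], image_b_samples[:100]):
    # One list per data point; list elements map to the model inputs by index.
    yield [np.expand_dims(a, 0).astype(np.float32),
           np.expand_dims(b, 0).astype(np.float32)]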
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
Explanation: Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
However, to maintain compatibility with applications that traditionally use float model input and output tensors, the TensorFlow Lite Converter leaves the model input and output tensors in float.
End of explanation
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
Explanation: That's usually good for compatibility, but it won't be compatible with devices that perform only integer-based operations, such as the Edge TPU.
Additionally, the above process may leave an operation in float format if TensorFlow Lite doesn't include a quantized implementation for that operation. This strategy allows conversion to complete so you have a smaller and more efficient model, but again, it won't be compatible with integer-only hardware. (All ops in this MNIST model do have a quantized implementation.)
So to ensure an end-to-end integer-only model, you need a couple more parameters.
Convert using integer-only quantization
To quantize the input and output tensors, and have the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
Explanation: The internal quantization remains the same as above, but you can see the input and output tensors are now in integer format.
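As a side note, the per-tensor quantization parameters are also exposed by the interpreter; a small sketch using the interpreter created in the code above:
python
input_details = interpreter.get_input_details()[0]
scale, zero_point = input_details['quantization']
# real_value is approximately (quantized_value - zero_point) * scale
print('input scale:', scale, 'zero point:', zero_point)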
End of explanation
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
Explanation: So now you have an integer quantized model that uses integer data for the model's input and output tensors, making it compatible with integer-only hardware such as the Edge TPU.
Save the models as files
You'll need a .tflite file to deploy your model on other devices, so let's save the converted models to files and then load them when we run inferences below.
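Once both files have been written by the code above, a quick way to see the effect of quantization is to compare their sizes on disk; a minimal sketch:
python
print('Float model size:    ', tflite_model_file.stat().st_size, 'bytes')
print('Quantized model size:', tflite_model_quant_file.stat().st_size, 'bytes')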
End of explanation
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
Explanation: Run the TensorFlow Lite models
Now let's run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions.
End of explanation
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
Explanation: Test the models on one image
Now let's compare the performance of the float model and the quantized model:
tflite_model_file is the original TensorFlow Lite model with floating-point data.
tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions.
End of explanation
test_model(tflite_model_file, test_image_index, model_type="Float")
Explanation: Now let's test the float model.
End of explanation
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
Explanation: And now let's test the quantized model (which uses uint8 data):
End of explanation
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
Explanation: Evaluate the models
Now let's run both models using all the test images we loaded at the beginning of this tutorial.
End of explanation
evaluate_model(tflite_model_file, model_type="Float")
Explanation: Evaluate the float model.
End of explanation
evaluate_model(tflite_model_quant_file, model_type="Quantized")
Explanation: Repeat the evaluation with the fully quantized model using uint8 data:
End of explanation |
14,183 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have the following data frame: | Problem:
import pandas as pd
import io
import numpy as np
from scipy import stats
temp=u"""probegenes,sample1,sample2,sample3
1415777_at Pnliprp1,20,0.00,11
1415805_at Clps,17,0.00,55
1415884_at Cela3b,47,0.00,100"""
df = pd.read_csv(io.StringIO(temp),index_col='probegenes')
indices = [('1415777_at Pnliprp1', 'data'), ('1415777_at Pnliprp1', 'zscore'), ('1415805_at Clps', 'data'), ('1415805_at Clps', 'zscore'), ('1415884_at Cela3b', 'data'), ('1415884_at Cela3b', 'zscore')]
indices = pd.MultiIndex.from_tuples(indices)
df2 = pd.DataFrame(data=stats.zscore(df, axis = 0), index=df.index, columns=df.columns)
df3 = pd.concat([df, df2], axis=1).to_numpy().reshape(-1, 3)
result = pd.DataFrame(data=np.round(df3, 3), index=indices, columns=df.columns) |
14,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
Step1: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step2: Comment
Maybe the 75-25 model is overfitting
Maybe reducing the test set increases the chances of having a high proportion of outliers in this set
3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here
Step3: 4. Using the breast cancer data, create a classifier to predict the type of seed. Perform the above hold out evaluation (50-50 and 75-25) and discuss the results. | Python Code:
from sklearn import datasets, tree, metrics
from sklearn.cross_validation import train_test_split
import numpy as np
dt = tree.DecisionTreeClassifier()
iris = datasets.load_iris()
x = iris.data[:,2:]
y = iris.target
# 50% - 50%
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
y_pred=dt.predict(x_test)
print("50%-50%")
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y_test, y_pred)),"\nClassification report:")
print(metrics.classification_report(y_test,y_pred),"\n")
print(metrics.confusion_matrix(y_test,y_pred),"\n")
Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
End of explanation
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)
y_pred=dt.predict(x_test)
print("75%-25%")
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y_test, y_pred)),"\n")
print(metrics.classification_report(y_test,y_pred),"\nClassification report:")
print(metrics.confusion_matrix(y_test,y_pred),"\n")
Explanation: 2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
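Because a single random split can swing the accuracy noticeably, one way to sanity-check the comparison is k-fold cross-validation; a sketch, assuming the same sklearn version as the imports above (in newer releases cross_val_score lives in sklearn.model_selection):
python
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(tree.DecisionTreeClassifier(), x, y, cv=5)
print("Accuracy per fold:", scores)
print("Mean accuracy: {0:.3f} (+/- {1:.3f})".format(scores.mean(), scores.std()))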
End of explanation
cancer = datasets.load_breast_cancer()
print("Here are the attributes we have:\n", cancer['DESCR'][1200:3057])
x = cancer.data[:,2:] # the attributes
y = cancer.target # the target variable
example_data = [i for i in x[0]]
print("Here's a sample of these attributes (first data row):")
print(*example_data)
print("We're trying to predict if a subject has cancer or not. Here is a sample of the targets:", y[20:30])
Explanation: Comment
Maybe the 75-25 model is overfitting
Maybe reducing the test set increases the chances of having a high proportion of outliers in this set
3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
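A quick way to see the attribute names and the prediction target directly from the sklearn Bunch object (a small sketch using the cancer object loaded above):
python
print(cancer.feature_names)   # the 30 measurement columns
print(cancer.target_names)    # ['malignant' 'benign'] -- the classes we're predicting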
End of explanation
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)
dt = dt.fit(x_train,y_train)
y_pred=dt.predict(x_test)
print("50%-50%")
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y_test, y_pred)),"\nClassification report:")
print(metrics.classification_report(y_test,y_pred),"\n")
print(metrics.confusion_matrix(y_test,y_pred),"\n")
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
dt = dt.fit(x_train,y_train)
y_pred=dt.predict(x_test)
print("75%-25%")
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y_test, y_pred)),"\nClassification report:")
print(metrics.classification_report(y_test,y_pred),"\n")
print(metrics.confusion_matrix(y_test,y_pred),"\n")
Explanation: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold-out evaluation (50-50 and 75-25) and discuss the results.
End of explanation |
14,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, (None), name='learning_rate')
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple (tensor of real input images, tensor of z data)
End of explanation
def discriminator(images, reuse=False, alpha=0.1):
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
debug = False
training = True
with tf.variable_scope('discriminator', reuse=reuse):
random_normal_init = tf.random_normal_initializer(mean=0, stddev=0.02)
# Input layer is 28x28x3
# Radford & Metz suggest not doing normalization on the
# generator output layer or discriminator input layer
conv1 = tf.layers.conv2d(inputs=images,
filters=64,
kernel_size=5, # 5 means 5x5
strides=2, # 2 means 2x2,
padding='same',
kernel_initializer=random_normal_init,
activation=None)
# leaky relu activation
conv1 = tf.maximum(alpha * conv1, conv1)
if debug:
print("Expected shape: 14x14x64. conv1.shape: ", conv1.shape)
conv2 = tf.layers.conv2d(inputs=conv1,
filters=128,
kernel_size=5, # 5 means 5x5
strides=2, # 2 means 2x2,
padding='same',
kernel_initializer=random_normal_init,
activation=None)
# batch normalization
conv2 = tf.layers.batch_normalization(conv2,
training=training)
# leaky relu activation
conv2 = tf.maximum(alpha * conv2, conv2)
if debug:
print("Expected shape: 7x7x128. conv2.shape: ", conv2.shape)
conv3 = tf.layers.conv2d(inputs=conv2,
filters=256,
kernel_size=5, # 5 means 5x5
strides=2, # 2 means 2x2,
padding='same',
kernel_initializer=random_normal_init,
activation=None)
# batch normalization
conv3 = tf.layers.batch_normalization(conv3,
training=training)
# leaky relu activation
conv3 = tf.maximum(alpha * conv3, conv3)
if debug:
print("Expected shape: 4x4x256. conv3.shape: ", conv3.shape)
flatten = tf.contrib.layers.flatten(conv3)
# Only looking for one probability of a real image
logits = tf.layers.dense(inputs=flatten,
units=1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
def generator(z, out_channel_dim, is_train=True, alpha=0.1):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
debug = False
with tf.variable_scope('generator', reuse=not is_train):
random_normal_init = tf.random_normal_initializer(mean=0, stddev=0.02)
# First fully connected layer
# has linear activation that will be turned into a leaky relu later
fullyconn = tf.layers.dense(inputs=z,
kernel_initializer=random_normal_init,
units=4*4*256)
# reshape to fit conv
fullyconn = tf.reshape(tensor=fullyconn,
shape=(-1, 4, 4, 256))
# normalize
fullyconn = tf.layers.batch_normalization(fullyconn,
training=is_train)
# leaky relu activation
fullyconn = tf.maximum(alpha * fullyconn, fullyconn)
if debug:
print("Expected shape: 4x4x256. fullyconn.shape: ", fullyconn.shape)
# convolutional transpose
conv1 = tf.layers.conv2d_transpose(inputs=fullyconn,
filters=128,
kernel_size=4, # 4 means 4x4
strides=1, # 1 means 1x1,
padding='valid',
kernel_initializer=random_normal_init)
# batch normalization
conv1 = tf.layers.batch_normalization(conv1,
training=is_train)
# leaky relu activation
conv1 = tf.maximum(alpha * conv1, conv1)
if debug:
print("Expected shape: 7x7x128. conv1.shape: ", conv1.shape)
# convolutional transpose
conv2 = tf.layers.conv2d_transpose(inputs=conv1,
filters=64,
kernel_size=5, # 5 means 5x5
strides=2, # 2 means 2x2,
padding='same',
kernel_initializer=random_normal_init)
# batch normalization
conv2 = tf.layers.batch_normalization(conv2,
training=is_train)
# leaky relu activation
conv2 = tf.maximum(alpha * conv2, conv2)
if debug:
print("Expected shape: 14x14x64. conv2.shape: ", conv2.shape)
# Output transpose layer, 28x28x3
logits = tf.layers.conv2d_transpose(inputs=conv2,
filters=out_channel_dim,
kernel_size=5,
strides=2,
padding='same',
kernel_initializer=random_normal_init)
# Radford & Metz suggest not doing normalization on the
# generator output layer or discriminator input layer
if debug:
print("Expected shape: 28x28x3. logits.shape: ", logits.shape)
#logits = tf.image.resize_images(logits, [28,28])
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
def model_loss(input_real, input_z, out_channel_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
# Source: DCGAN Exercise
g_model = generator(input_z, out_channel_dim, is_train=True, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1=0.5):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Separate out variables for discriminator vs generator
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
# Source: DCGAN notebook
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.1, beta1=0.5):
real_width = real_size[1]
real_height = real_size[2]
real_depth = real_size[3]
self.input_real, self.input_z, self.learning_rate = model_inputs(real_width, real_height, real_depth, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_depth, alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode, print_every=50, show_every=100, figsize=(5,5)):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
debug = False
net = GAN(data_shape, z_dim, learning_rate, 0.2, beta1)
data_min = -1.0
data_max = 1.0
saver = tf.train.Saver()
# Random Noise for sampling from generator
sample_z = np.random.uniform(data_min, data_max, size=(72, z_dim))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
steps += 1
# Rescale images from the dataset's [-0.5, 0.5] range to [-1, 1] to match the generator's tanh output
batch_images = batch_images * 2
# Training noise for generator
batch_z = np.random.uniform(data_min, data_max, size=(batch_size, z_dim))
# optimize discriminator
if debug:
print("optimizing discriminator...")
d_session = sess.run(net.d_opt, feed_dict={net.input_real: batch_images, net.input_z: batch_z, net.learning_rate: learning_rate})
# optimize generator
if debug:
print("optimizing generator...")
g_session = sess.run(net.g_opt, feed_dict={net.input_real: batch_images, net.input_z: batch_z, net.learning_rate: learning_rate})
# optimize generator a second time (two generator updates per discriminator update)
if debug:
print("optimizing generator...")
g_session = sess.run(net.g_opt, feed_dict={net.input_real: batch_images, net.input_z: batch_z, net.learning_rate: learning_rate})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: batch_images})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{} Step {}...".format(epoch_i+1, epoch_count, steps),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
show_generator_output(sess, 10, net.input_z, data_shape[3], data_image_mode)
return losses, samples
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
batch_size = 128
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
tf.reset_default_graph()
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
batch_size = 128
z_dim = 100
learning_rate = 0.0002
beta1 = 0.5
tf.reset_default_graph()
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation |
14,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
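A minimal sketch of that naming scheme (assuming a session sess and the counters iteration and lstm_size from your training loop; the directory name is an assumption):
python
saver = tf.train.Saver(max_to_keep=100)
# ... inside the training loop, every save_every_n steps:
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(iteration, lstm_size))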
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
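A small sketch of that top-N filtering step (a hypothetical helper using NumPy): zero out everything except the N most likely characters, renormalize, and sample from what's left.
python
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0   # keep only the top_n probabilities
    p = p / np.sum(p)               # renormalize so they sum to 1
    return np.random.choice(vocab_size, 1, p=p)[0]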
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps_per_seq):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
# e.g. n_seqs = 10, n_steps_per_seq = 2, batch_size = 20
batch_size = n_seqs * n_steps_per_seq
# ie arr= 40, over 20, so 2 batches
n_batches = len(arr) // batch_size
# Keep only enough characters to make full batches
# e.g. keep n_batches * batch_size = 2 * 20 = 40 characters
# (we can't simply use len(arr) because any leftover characters that don't fill a complete batch are dropped)
arr = arr[ : n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps_per_seq):
# The features
x = arr[ :, n: n + n_steps_per_seq]
# The targets, shifted by one
y = np.zeros_like(x)
y[ :, : -1], y[ : , -1] = x[ :, 1: ], x[ :, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
with tf.name_scope('inputs'):
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, (batch_size, num_steps), name="inputs")
targets = tf.placeholder(tf.int32, (batch_size, num_steps), name="targets")
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
def single_lstm_cell(lstm_size, keep_prob):
with tf.name_scope("RNN_layers"):
lstm = tf.contrib.rnn.NASCell(lstm_size, reuse = tf.get_variable_scope().reuse)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob)
return drop
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Stack up multiple LSTM layers, for deep learning
with tf.name_scope("RNN_layers"):
rnn_cells = tf.contrib.rnn.MultiRNNCell([single_lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)],
state_is_tuple = True)
with tf.name_scope("RNN_init_state"):
initial_state = rnn_cells.zero_state(batch_size, tf.float32)
return rnn_cells, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
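One caveat (not from the original text): in later TensorFlow 1.x releases, reusing a single cell object this way raises an error, so it's safer to build a separate cell per layer, which is what the implementation above does; a sketch:
python
def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])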
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
# ie t1 = t1 = [[1, 2, 3], [4, 5, 6]]
# t2 = [[7, 8, 9], [10, 11, 12]]
# tf.concat([t1, t2], 1) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal( (in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros( out_size ))
# tensorboard
tf.summary.histogram("softmax_w", softmax_w)
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name="predictions")
tf.summary.histogram("predictions", out)
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape( y_one_hot, logits.get_shape() )
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
# tensorboard
tf.summary.scalar('loss', loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M * N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M * N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
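As a rough numpy illustration of what tf.clip_by_global_norm does (toy numbers, separate from the graph code above):

```python
import numpy as np

def clip_by_global_norm_np(grads, clip_norm):
    # the global norm is the L2 norm over every value in every gradient tensor
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)  # only ever shrinks
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]          # global norm = 13
clipped, norm = clip_by_global_norm_np(grads, clip_norm=5.0)
# every gradient is scaled by 5/13, so relative directions are preserved
```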
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
x_one_hot = tf.one_hot(self.inputs, num_classes, name="x_one_hot")
with tf.name_scope("RNN_layers"):
# Build the LSTM cell
cells, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cells, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
batch_size = 64 # Sequences per batch
num_steps = 128 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
with tf.Session() as sess:
sess.run( tf.global_variables_initializer() )
file_writer = tf.summary.FileWriter( './logs/2', sess.graph)
# model = build_rnn( len(vocab), batch_size, num_steps, learning_rate, lstm_size, num_layers)
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Typically larger is better: the network can learn longer-range dependencies, but it also takes longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
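If you want to check the parameter count for the model defined in this notebook, a quick sketch (assuming numpy and TensorFlow are imported as np and tf, as they are in this notebook):

```python
total_params = sum(
    int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print('Trainable parameters: {:,}'.format(total_params))
```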
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
TensorBoard
End of explanation
epochs = 3
# Save every N iterations
save_every_n = 200
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Tensorboard
train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
test_writer = tf.summary.FileWriter('./logs/2/test')
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
merged = tf.summary.merge_all() # Tensorboard (ideally build this summary op once, before the training loop)
summary, batch_loss, new_state, _ = sess.run([merged, model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
train_writer.add_summary(summary, counter)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
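One quick way to confirm TensorFlow can actually see a GPU before committing to a long run (a small sketch using the TF 1.x device_lib utility):

```python
from tensorflow.python.client import device_lib

# lists the CPU plus any visible GPU devices
print([d.name for d in device_lib.list_local_devices()])
```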
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
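For intuition, here is roughly what pick_top_n does to a toy probability vector (made-up numbers, not real model output):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.1, 0.06, 0.04])
p[np.argsort(p)[:-2]] = 0           # keep only the top 2 probabilities
p = p / np.sum(p)                   # renormalise -> [0.625, 0.375, 0, 0, 0]
c = np.random.choice(len(p), p=p)   # sample a character index from the rest
```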
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
14,187 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning with H2O - Tutorial 4b
Step1: <br>
Step2: <br>
Define Search Criteria for Random Grid Search
Step3: <br>
Step 1
Step4: <br>
Step 2
Step5: <br>
Model Stacking
Step6: <br>
Comparison of Model Performance on Test Data | Python Code:
# Import all required modules
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.deeplearning import H2ODeepLearningEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
from h2o.grid.grid_search import H2OGridSearch
# Start and connect to a local H2O cluster
h2o.init(nthreads = -1)
Explanation: Machine Learning with H2O - Tutorial 4b: Classification Models (Ensembles)
<hr>
Objective:
This tutorial explains how to create stacked ensembles of classification models for better out-of-bag performance.
<hr>
Titanic Dataset:
Source: https://www.kaggle.com/c/titanic/data
<hr>
Steps:
Build GBM models using random grid search and extract the best one.
Build DRF models using random grid search and extract the best one.
Use model stacking to combining different models.
<hr>
Full Technical Reference:
http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/modeling.html
http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html
<br>
End of explanation
# Import Titanic data (local CSV)
titanic = h2o.import_file("kaggle_titanic.csv")
titanic.head(5)
# Convert 'Survived' and 'Pclass' to categorical values
titanic['Survived'] = titanic['Survived'].asfactor()
titanic['Pclass'] = titanic['Pclass'].asfactor()
# Define features (or predictors) manually
features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
# Split the H2O data frame into training/test sets
# so we can evaluate out-of-bag performance
titanic_split = titanic.split_frame(ratios = [0.8], seed = 1234)
titanic_train = titanic_split[0] # using 80% for training
titanic_test = titanic_split[1] # using the rest 20% for out-of-bag evaluation
titanic_train.shape
titanic_test.shape
Explanation: <br>
End of explanation
# define the criteria for random grid search
search_criteria = {'strategy': "RandomDiscrete",
'max_models': 9,
'seed': 1234}
Explanation: <br>
Define Search Criteria for Random Grid Search
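If you would rather budget the random search by wall-clock time than by model count, H2O's search criteria also accept a time limit — a hedged variant (the 600 seconds here is an arbitrary example):

```python
search_criteria_timed = {'strategy': "RandomDiscrete",
                         'max_runtime_secs': 600,   # stop after ~10 minutes
                         'seed': 1234}
```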
End of explanation
# define the range of hyper-parameters for GBM grid search
# 27 combinations in total
hyper_params = {'sample_rate': [0.7, 0.8, 0.9],
'col_sample_rate': [0.7, 0.8, 0.9],
'max_depth': [3, 5, 7]}
# Set up GBM grid search
# Add a seed for reproducibility
gbm_rand_grid = H2OGridSearch(
H2OGradientBoostingEstimator(
model_id = 'gbm_rand_grid',
seed = 1234,
ntrees = 10000,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True, # needed for stacked ensembles
stopping_metric = 'mse',
stopping_rounds = 15,
score_tree_interval = 1),
search_criteria = search_criteria, # random grid search
hyper_params = hyper_params)
# Use .train() to start the grid search
gbm_rand_grid.train(x = features,
y = 'Survived',
training_frame = titanic_train)
# Sort and show the grid search results
gbm_rand_grid_sorted = gbm_rand_grid.get_grid(sort_by='auc', decreasing=True)
print(gbm_rand_grid_sorted)
# Extract the best model from random grid search
best_gbm_model_id = gbm_rand_grid_sorted.model_ids[0]
best_gbm_from_rand_grid = h2o.get_model(best_gbm_model_id)
best_gbm_from_rand_grid.summary()
Explanation: <br>
Step 1: Build GBM Models using Random Grid Search and Extract the Best Model
End of explanation
# define the range of hyper-parameters for DRF grid search
# 27 combinations in total
hyper_params = {'sample_rate': [0.5, 0.6, 0.7],
'col_sample_rate_per_tree': [0.7, 0.8, 0.9],
'max_depth': [3, 5, 7]}
# Set up DRF grid search
# Add a seed for reproducibility
drf_rand_grid = H2OGridSearch(
H2ORandomForestEstimator(
model_id = 'drf_rand_grid',
seed = 1234,
ntrees = 200,
nfolds = 5,
fold_assignment = "Modulo", # needed for stacked ensembles
keep_cross_validation_predictions = True), # needed for stacked ensembles
search_criteria = search_criteria, # random grid search
hyper_params = hyper_params)
# Use .train() to start the grid search
drf_rand_grid.train(x = features,
y = 'Survived',
training_frame = titanic_train)
# Sort and show the grid search results
drf_rand_grid_sorted = drf_rand_grid.get_grid(sort_by='auc', decreasing=True)
print(drf_rand_grid_sorted)
# Extract the best model from random grid search
best_drf_model_id = drf_rand_grid_sorted.model_ids[0]
best_drf_from_rand_grid = h2o.get_model(best_drf_model_id)
best_drf_from_rand_grid.summary()
Explanation: <br>
Step 2: Build DRF Models using Random Grid Search and Extract the Best Model
End of explanation
# Define a list of models to be stacked
# i.e. best model from each grid
all_ids = [best_gbm_model_id, best_drf_model_id]
# Set up Stacked Ensemble
ensemble = H2OStackedEnsembleEstimator(model_id = "my_ensemble",
base_models = all_ids)
# use .train to start model stacking
# GLM as the default metalearner
ensemble.train(x = features,
y = 'Survived',
training_frame = titanic_train)
Explanation: <br>
Model Stacking
End of explanation
print('Best GBM model from Grid (AUC) : ', best_gbm_from_rand_grid.model_performance(titanic_test).auc())
print('Best DRF model from Grid (AUC) : ', best_drf_from_rand_grid.model_performance(titanic_test).auc())
print('Stacked Ensembles (AUC) : ', ensemble.model_performance(titanic_test).auc())
Explanation: <br>
Comparison of Model Performance on Test Data
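If you want to keep the winning model around for later scoring, one option is H2O's model export (the path below is just an example):

```python
model_path = h2o.save_model(model=ensemble, path="./models", force=True)
print(model_path)
```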
End of explanation |
14,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistically meaningful charts
Seaborn
The next module we will explore is Seaborn. Seaborn is a Python visualization library built on top of matplotlib and tightly integrated with the PyData stack, including support for numpy and pandas data structures and statistical routines from scipy and statsmodels. It provides a high-level interface for drawing attractive statistical graphics... emphasis on STATISTICS. You don't want to use Seaborn as a general-purpose charting library.
http
Step1: Load up some test data to play with
Step2: Plotting linear regression
http
Step3: Plotting logistic regression
http | Python Code:
%matplotlib inline
import matplotlib
import seaborn as sns
import pandas as pd
import numpy as np
import warnings
sns.set(color_codes=True)
warnings.filterwarnings("ignore")
Explanation: Statistically meaningful charts
Seaborn
The next module we will explore is Seaborn. Seaborn is a Python visualization library built on top of matplotlib and tightly integrated with the PyData stack, including support for numpy and pandas data structures and statistical routines from scipy and statsmodels. It provides a high-level interface for drawing attractive statistical graphics... emphasis on STATISTICS. You don't want to use Seaborn as a general-purpose charting library.
http://web.stanford.edu/~mwaskom/software/seaborn/index.html
End of explanation
tips = pd.read_csv('input/tips.csv')
tips['tip_percent'] = (tips['tip'] / tips['total_bill'] * 100)
tips.head()
tips.describe()
Explanation: Load up some test data to play with
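If you do not have the local CSV, seaborn bundles a similar tips dataset you could substitute — note its column names differ slightly from the file used here (for example sex/smoker rather than gender/ordered_alc_bev):

```python
tips_builtin = sns.load_dataset("tips")
tips_builtin.head()
```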
End of explanation
sns.jointplot("total_bill", "tip_percent", tips, kind='reg');
sns.lmplot(x="total_bill", y="tip_percent", hue="ordered_alc_bev", data=tips)
sns.lmplot(x="total_bill", y="tip_percent", col="day", data=tips, aspect=.5)
sns.lmplot(x="total_bill", y="tip_percent", hue='ordered_alc_bev', col="time", row='gender', size=6, data=tips);
Explanation: Plotting linear regression
http://web.stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html
End of explanation
# Let's add some calculated columns
tips['tip_above_avg'] = np.where(tips['tip_percent'] >= tips['tip_percent'].mean(), 1, 0)
tips.replace({'Yes': 1, 'No': 0}, inplace=True)
tips.head()
sns.lmplot(x="tip_percent", y="ordered_alc_bev", col='gender', data=tips, logistic=True)
sns.lmplot(x="ordered_alc_bev", y="tip_above_avg", col='gender', data=tips, logistic=True)
Explanation: Plotting logistic regression
http://web.stanford.edu/~mwaskom/software/seaborn/tutorial/regression.html
End of explanation |
14,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to construct a simple convo network
Step1: calculate the number of parameters of a convo layer
Step2: The output layer shape is
Step3: There are 756,560 total parameters. That's a HUGE amount! Here's how we calculate it
Step4: simple cnn in tf
TensorFlow provides the tf.nn.conv2d() and tf.nn.bias_add() functions to create your own convolutional layers.
Step5: Max pooling
Max pooling takes the filter size, say 2x2, and grabs the max value and passes it on. This often leads to more accurate models, but is computationally expensive as the stride is typically 1.
Pooling layers decrease the size of the output and prevent overfitting.
The tf.nn.max_pool() function performs max pooling with the ksize parameter as the size of the filter and the strides parameter as the length of the stride. 2x2 filters with a stride of 2x2 are common in practice.
The ksize and strides parameters are structured as 4-element lists, with each element corresponding to a dimension of the input tensor ([batch, height, width, channels]). For both ksize and strides, the batch and channel dimensions are typically set to 1.
Step6: Recently, pooling layers have fallen out of favor. Some reasons are
Step8: 1x1 convulutions
Inception modules
This performs a few convulutions at the same time and stacks them together. It generally works better then a simple convultion layer.
Quiz
Step9: Calculate the output height and width using the formula
Step11: using a pooling layer in tensorflow | Python Code:
import tensorflow as tf

input = tf.placeholder(tf.float32, (None, 32, 32, 3))
filter_weights = tf.Variable(tf.truncated_normal((8, 8, 3, 20))) # (height, width, input_depth, output_depth)
filter_bias = tf.Variable(tf.zeros(20))
strides = [1, 2, 2, 1] # (batch, height, width, depth)
padding = 'VALID'
conv = tf.nn.conv2d(input, filter_weights, strides, padding) + filter_bias
Explanation: How to construct a simple convo network:
End of explanation
# convo layer output layer shape:
# new_height = (input_height - filter_height + 2 * P)/S + 1
# new_width = (input_width - filter_width + 2 * P)/S + 1
((32 - 8 + 2*1) / 2 + 1), ((32 - 8 + 2*1) / 2 + 1), 20
Explanation: calculate the number of parameters of a convo layer:
H = height, W = width, D = depth
We have an input of shape 32x32x3 (HxWxD)
20 filters of shape 8x8x3 (HxWxD)
A stride of 2 for both the height and width (S)
Zero padding of size 1 (P)
The output layer shape can be calculated using:
new_height = (input_height - filter_height + 2 * P)/S + 1
new_width = (input_width - filter_width + 2 * P)/S + 1
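As a quick sanity check, the same arithmetic as a throwaway helper (plain Python, nothing TensorFlow-specific):

```python
def conv_output_hw(in_h, in_w, f_h, f_w, stride, pad):
    # zero padding of size `pad` on each side, equal stride in both directions
    return ((in_h - f_h + 2 * pad) // stride + 1,
            (in_w - f_w + 2 * pad) // stride + 1)

conv_output_hw(32, 32, 8, 8, stride=2, pad=1)   # -> (14, 14)
```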
End of explanation
# parameters in a convo layer
(8*8*3 +1) * (14*14*20)
Explanation: The output layer shape is: 14x14x20 (HxWxD).
The new depth is equal to the number of filters, which is 20.
End of explanation
(8*8*3 + 1) * 20  # 3,860 parameters when each filter's weights and bias are shared across the 14x14 output
Explanation: There are 756,560 total parameters. That's a HUGE amount! Here's how we calculate it:
8 * 8 * 3 is the number of weights, we add 1 for the bias. Remember, each weight is assigned to every single part of the output (14 * 14 * 20). So we multiply these two numbers together and we get the final answer.
Calculate the number of parameters in the convolutional layer, if every neuron in the output layer shares its parameters with every other neuron in its same channel.
This is the number of parameters actually used in a convolution layer tf.nn.conv2d().
End of explanation
# Output depth
k_output = 64
# Image Properties
image_width = 10
image_height = 10
color_channels = 3
# Convolution filter
filter_size_width = 5
filter_size_height = 5
# Input/Image
input = tf.placeholder(
tf.float32,
shape=[None, image_height, image_width, color_channels])
# Weight and bias
weight = tf.Variable(tf.truncated_normal(
[filter_size_height, filter_size_width, color_channels, k_output]))
bias = tf.Variable(tf.zeros(k_output))
# Apply Convolution
conv_layer = tf.nn.conv2d(input, weight, strides=[1, 2, 2, 1], padding='SAME')
# Add bias
conv_layer = tf.nn.bias_add(conv_layer, bias)
# Apply activation function
conv_layer = tf.nn.relu(conv_layer)
Explanation: simple cnn in tf
TensorFlow provides the tf.nn.conv2d() and tf.nn.bias_add() functions to create your own convolutional layers.
End of explanation
# Apply Max Pooling
conv_layer = tf.nn.max_pool(
conv_layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
Explanation: Max pooling
Max pooling takes the filter size, say 2x2, and grabs the max value and passes it on. This often leads to more accurate models, but is computationally expensive as the stride is typically 1.
Pooling layers decrease the size of the output and prevent overfitting.
The tf.nn.max_pool() function performs max pooling with the ksize parameter as the size of the filter and the strides parameter as the length of the stride. 2x2 filters with a stride of 2x2 are common in practice.
The ksize and strides parameters are structured as 4-element lists, with each element corresponding to a dimension of the input tensor ([batch, height, width, channels]). For both ksize and strides, the batch and channel dimensions are typically set to 1.
End of explanation
input = tf.placeholder(tf.float32, (None, 4, 4, 5))
filter_shape = [1, 2, 2, 1]
strides = [1, 2, 2, 1]
padding = 'VALID'
pool = tf.nn.max_pool(input, filter_shape, strides, padding)
Explanation: Recently, pooling layers have fallen out of favor. Some reasons are:
Recent datasets are so big and complex we're more concerned about underfitting.
Dropout is a much better regularizer.
Pooling results in a loss of information. Think about the max pooling operation as an example. We only keep the largest of n numbers, thereby disregarding n-1 numbers completely.
A pooling layer example:
End of explanation
Setup the strides, padding and filter weight/bias such that
the output shape is (1, 2, 2, 3).
import tensorflow as tf
import numpy as np
# `tf.nn.conv2d` requires the input be 4D (batch_size, height, width, depth)
# (1, 4, 4, 1)
x = np.array([
[0, 1, 0.5, 10],
[2, 2.5, 1, -8],
[4, 0, 5, 6],
[15, 1, 2, 3]], dtype=np.float32).reshape((1, 4, 4, 1))
X = tf.constant(x)
x.shape
def conv2d(input):
# Filter (weights and bias)
# The shape of the filter weight is (height, width, input_depth, output_depth)
# The shape of the filter bias is (output_depth,)
# TODO: Define the filter weights `F_W` and filter bias `F_b`.
# NOTE: Remember to wrap them in `tf.Variable`, they are trainable parameters after all.
F_W = tf.Variable(tf.truncated_normal([2,2,1,3]))
F_b = tf.Variable(tf.zeros(3))
# TODO: Set the stride for each dimension (batch_size, height, width, depth)
strides = [1, 2, 2, 1]
# TODO: set the padding, either 'VALID' or 'SAME'.
padding = 'SAME'
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#conv2d
# `tf.nn.conv2d` does not include the bias computation so we have to add it ourselves after.
return tf.nn.conv2d(input, F_W, strides, padding) + F_b
out = conv2d(X)
# udacity's solution
def conv2d(input):
# Filter (weights and bias)
F_W = tf.Variable(tf.truncated_normal((2, 2, 1, 3))) # (height, width, input_depth, output_depth)
F_b = tf.Variable(tf.zeros(3)) # (output_depth)
strides = [1, 2, 2, 1]
padding = 'VALID'
return tf.nn.conv2d(input, F_W, strides, padding) + F_b
Explanation: 1x1 convolutions
Inception modules
This performs a few convolutions at the same time and stacks them together. It generally works better than a simple convolution layer.
Quiz
End of explanation
import math

out_height = math.ceil(float(4 - 2 + 1) / float(2))
out_width = math.ceil(float(4 - 2 + 1) / float(2))
out_height, out_width
Explanation: Calculate the output height and width using the formula:
out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
End of explanation
Set the values to `strides` and `ksize` such that
the output shape after pooling is (1, 2, 2, 1).
# `tf.nn.max_pool` requires the input be 4D (batch_size, height, width, depth)
# (1, 4, 4, 1)
x = np.array([
[0, 1, 0.5, 10],
[2, 2.5, 1, -8],
[4, 0, 5, 6],
[15, 1, 2, 3]], dtype=np.float32).reshape((1, 4, 4, 1))
X = tf.constant(x)
def maxpool(input):
# TODO: Set the ksize (filter size) for each dimension (batch_size, height, width, depth)
ksize = [1, 2, 2, 1]
# TODO: Set the stride for each dimension (batch_size, height, width, depth)
strides = [1, 2, 2, 1]
# TODO: set the padding, either 'VALID' or 'SAME'.
padding = 'VALID'
# https://www.tensorflow.org/versions/r0.11/api_docs/python/nn.html#max_pool
return tf.nn.max_pool(input, ksize, strides, padding)
out = maxpool(X)
# udacity solution
def maxpool(input):
ksize = [1, 2, 2, 1]
strides = [1, 2, 2, 1]
padding = 'VALID'
return tf.nn.max_pool(input, ksize, strides, padding)
Explanation: using a pooling layer in tensorflow
End of explanation |
14,190 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have fitted a k-means algorithm on 5000+ samples using the python scikit-learn library. I want to have the 50 samples closest (data, not just index) to a cluster center "p" (e.g. p=2) as an output, here "p" means the p^th center. How do I perform this task? | Problem:
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
p, X = load_data()
assert type(X) == np.ndarray
km = KMeans()
km.fit(X)
d = km.transform(X)[:, p]
indexes = np.argsort(d)[:50]
closest_50_samples = X[indexes] |
14,191 | Given the following text description, write Python code to implement the functionality described.
Description:
Given a non-empty list of integers lst, add the even elements that are at odd indices.
Examples:
add([4, 2, 6, 7]) ==> 2
| Python Code:
def add(lst):
return sum([lst[i] for i in range(1, len(lst), 2) if lst[i]%2 == 0]) |
14,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
Step2: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient)
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still | Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
### START CODE HERE ### (approx. 1 line)
J = x * theta
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
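Before coding anything, you can sanity-check formula (1) by hand for this model (toy numbers only; the true gradient is dJ/dtheta = x = 2):

```python
x, theta, eps = 2, 4, 1e-7
approx = ((theta + eps) * x - (theta - eps) * x) / (2 * eps)
print(approx)   # ~2.0
```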
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
End of explanation
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon=1e-7):
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon
thetaminus = theta - epsilon
Jplus = forward_propagation(x, thetaplus)
Jminus = forward_propagation(x, thetaminus)
gradapprox = (Jplus - Jminus) / 2 / epsilon
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
difference = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
### END CODE HERE ###
if difference < 1e-7:
print("The gradient is correct!")
else:
print("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
End of explanation
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1. / m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1. / m * np.dot(dZ3, A2.T)
db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1. / m * np.dot(dZ2, A1.T) # Should not multiply by 2
db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1. / m * np.dot(dZ1, X.T)
db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True) # Should not multiply by 4
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
Explanation: Now, run backward propagation.
End of explanation
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
parametersplus = np.copy(parameters_values)
parametersplus[i] += epsilon
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(parametersplus))
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
parametersminus = np.copy(parameters_values)
parametersminus[i] -= epsilon
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(parametersminus))
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / 2 / epsilon
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
difference = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
### END CODE HERE ###
if difference > 1e-7:
print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
End of explanation |
14,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Traverse a Square - Part N - Functions
tricky because of scoping - need to think carefully about this....
In the previous notebook on this topic, we had described how to use a loop that could run the same block of code multiple times so that we could avoid repeating ourselves in the construction of program to drive a mobile robot along a square shaped trajectory.
One possible form of the program was as follows - note the use of variables to specify several parameter values
Step1: The function definition takes the following, minimal form
Step2: How did you get on? Could you work out how to use the functions?
Here's how I used them | Python Code:
import time
def myFunction():
print("Hello...")
#Pause awhile...
time.sleep(2)
print("...world!")
#call the function - note the brackets!
myFunction()
Explanation: Traverse a Square - Part N - Functions
tricky because of scoping - need to think carefully about this....
In the previous notebook on this topic, we had described how to use a loop that could run the same block of code multiple times so that we could avoid repeating ourselves in the construction of program to drive a mobile robot along a square shaped trajectory.
One possible form of the program was as follows - note the use of variables to specify several parameter values:
```python
import time
side_speed=2
side_length_time=1
turn_speed=1.8
turn_time=0.45
number_of_sides=4
for side in range(number_of_sides):
#side
robot.move_forward(side_speed)
time.sleep(side_length_time)
#turn
robot.rotate_left(turn_speed)
time.sleep(turn_time)
```
Looking at the program, we have grouped the lines of code inside the loop into two separate meaningful groups:
one group of lines to move the robot in a straight line along one side of the square;
one group of lines to turn the robot through ninety degrees.
We can further abstract the program into one in which we define some custom functions that can be called by name and that will execute a code block captured within the function definition.
Here's an example:
End of explanation
%run 'Set-up.ipynb'
%run 'Loading scenes.ipynb'
%run 'vrep_models/PioneerP3DX.ipynb'
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
#Your code - using functions - here
Explanation: The function definition takes the following, minimal form:
python
def NAME_OF_FUNCTION():
#Code block - there must be at least one line of code
#That said, we can use a null (do nothing) statement
pass
Set up the notebook to use the simulator and see if you can think of a way to use functions to call the lines of code that control the robot.
The function definitions should appear before the loop.
End of explanation
%%vrepsim '../scenes/OU_Pioneer.ttt' PioneerP3DX
import time
side_speed=2
side_length_time=1
turn_speed=1.8
turn_time=0.45
number_of_sides=4
def traverse_side():
    #side
    robot.move_forward(side_speed)
    time.sleep(side_length_time)

def turn():
    #turn
    robot.rotate_left(turn_speed)
    time.sleep(turn_time)

for side in range(number_of_sides):
    traverse_side()
    turn()
Explanation: How did you get on? Could you work out how to use the functions?
Here's how I used them:
End of explanation |
14,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
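For instance, a purely illustrative entry (placeholder name and e-mail, not a real author) would be:
DOC.set_author("Jane Doe", "jane.doe@example.org")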
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
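As a purely hypothetical illustration (the value must be one of the Valid Choices listed in the code cell above), a model with fixed stoichiometry would record:
DOC.set_value("Fixed")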
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
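For a boolean property such as this one, an illustrative (hypothetical) entry is simply one of the two Valid Choices shown in the cell above, e.g.:
DOC.set_value(True)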
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
14,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with Application Programming Interfaces (APIs)
APIs make it easy to collect data for text mining and machine learning projects. In this workshop, we'll learn how to collect data from APIs, query APIs to filter the data we fetch, and process the data that APIs return to extract information of interest. Let's get started!
What is an API?
Every API is an interface to a database. When we interact with the API, we fetch data from the database. This series of relationships could be visualized as follows
Step1: As we can see, the data we get using the requests library is identical to the data we got when requesting the same URL through our web browser. Let's practice parsing the JSON data from DataMuse with Python.
You may know that we can "iterate through" (or examine one-by-one) the dictionaries in the list above using a <b>for loop</b>
Step2: This for loop lets us examine each item in data one by one. In the first pass through the for loop above, the value of i is the first dictionary in data
Step3: To return the value assigned to a key, we can use the following syntax
Step4: Reviewing JSON Parsing
Using the skills discussed above, see if you can print the name addresses of each item in the following list
Step5: <details style='margin
Step6: <details style='margin
Step7: <details style='margin
Step8: If one attempts to access the value assigned to each "profession" key in the list above, Python will throw a KeyError error
Step9: The most general way to solve this problem in Python is to use try and except syntax. These two commands work together to say
Step10: The try and except syntax above will allow you to handle any error that occurs in Python gracefully. In the case of dictionaries, however, we can use the .get method to handle KeyError events
Step11: Reviewing Missing Data
See if you can update the code below to return "Unknown" if there is no specified author key
Step12: <details style='margin
Step13: We can see this response has five top-level keys
Step14: If you examine the URL above, you can see ?andtext=horse. This is the syntax used by the Chronicling America API to allow users to find texts that contain a given word or phrase. Let's run the query again, but this time using a search word of personal interest to you
Step15: Working with Paginated API Responses
Great! We've successfully fetched 20 responses from the server. However, if we run print(data['totalItems']), we'll see that there are many more hits for our query
Step16: To access the other hits for our query, we must request each "page" of results individually. A page of API results is just like a page of results displayed in a web browser--each page contains a fixed number of results. In the case of a Google search, each page contains 10 search results. In the case of the Chronicling America API, each page contains 20 results. Each API is different, but the concept of pagination is the same.
To paginate through the pages of results for our query, the Chronicling America documentation tells us, we can add a query parameter page=n to our url, where n is the index position of the page we want to fetch. For example, we can request
Step17: Notice that the first argument passed to range()—the start value—is inclusive, but the second argument—the end value—is exclusive!
Let's use the range function to create a list of pages we wish to fetch
Step18: Now results will be a list of dictionaries we can parse to fetch multiple pages of data!
Reviewing Pagination
Given what we covered above, see if you can fetch pages 100 through 200 (both inclusive) of results that include the word "horses". Beware of "off-by-one" problems!
Step19: <details style='margin | Python Code:
import requests
url = 'https://api.datamuse.com/words?sp=t??k'
# get the content at the requested url
response = requests.get(url)
# get the JSON data in the response object
data = response.json()
print(data)
Explanation: Getting Started with Application Programming Interfaces (APIs)
APIs make it easy to collect data for text mining and machine learning projects. In this workshop, we'll learn how to collect data from APIs, query APIs to filter the data we fetch, and process the data that APIs return to extract information of interest. Let's get started!
What is an API?
Every API is an interface to a database. When we interact with the API, we fetch data from the database. This series of relationships could be visualized as follows:
<div style='text-align: center'><img src='./assets/api-schematic.png'><a href='https://medium.com/swlh/building-a-restful-api-with-rails-4b98dc76bf9c'>Image by Hector Polanco</a></div>
As this illustration indicates, a user tells an API what kind of data they would like, then the API fetches that data from a database and sends it to the user. Let's see this in action in our first example below.
Solving Crossword Puzzles with the DataMuse API
For our first example, let's consider the DataMuse API. This API allows us to interact with a database that contains information about words. Using the DataMuse API, we can find words with certain constraints. Suppose for example we are completing the puzzle below and wish to find words that would fit into <i>2 Down</i>:
<div style='text-align: center'>
<img style='width:140px;margin: 20px auto;' src='./assets/crossword-puzzle.png'>
</div>
To solve for <i>2 Down</i>, we need a four-letter word that begins with S and ends with D. How can we use the DataMuse API to get a list of words that meet our criteria?
Since the structure for URL queries varies somewhat depending on the API, we recommend starting out by reading the documentation for the API you're using. To see what URL patterns you can use to query the DataMuse database, take a look at the DataMuse API documentation:
<table class='text'>
<thead>
<tr>
<th style='text-align: left' style='text-align: left'>In order to find...</th>
<th style='text-align: left'>Use <a href='https://api.datamuse.com'>https://api.datamuse.com</a> + ...</th>
</tr>
</thead>
<tbody>
<tr>
<td style='text-align: left'>words that start with <i>t</i>, end in <i>k</i>, and have two letters in between</td>
<td style='text-align: left'><a href="https://api.datamuse.com/words?sp=t??k" class="apilink">/words?sp=t??k</a></td>
</tr>
<tr>
<td style='text-align: left'>words that rhyme with <i>forgetful</i></td>
<td style='text-align: left'><a href="https://api.datamuse.com/words?rel_rhy=forgetful" class="apilink">/words?rel_rhy=forgetful</a></td>
</tr>
<tr>
<td style='text-align: left'>words that end with the letter <i>a</i> that are related to <i>spoon</i></td>
<td style='text-align: left'><a href="https://api.datamuse.com/words?ml=spoon&sp=*a&max=10" class="apilink">/words?sp=*a&ml=spoon</a></td>
</tr>
</tbody>
</table>
Let's spend a minute analyzing these URLs. As we can see, all of the queries we can perform begin with "https://api.datamuse.com". We are then instructed to add "/words?" followed by a pattern specific to the kind of data we wish to obtain:
For four letter words that begin with <i>t</i> and end with <i>k</i>, use https://api.datamuse.com/words?sp=t??k
For words that rhyme with <i>forgetful</i>, use https://api.datamuse.com/words?rel_rhy=forgetful
To find adjectives related to <i>crossword</i>, use https://api.datamuse.com/words?rel_jjb=crossword
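As an aside, when these URLs are requested from Python with the requests library (as elsewhere in this notebook), the part after the ? can equivalently be passed as a params dictionary. A minimal sketch, mirroring the sp=t??k example above (the [:3] slice is only there to keep the printout short):

```python
import requests

# equivalent to requesting https://api.datamuse.com/words?sp=t??k
response = requests.get('https://api.datamuse.com/words', params={'sp': 't??k'})
print(response.json()[:3])  # peek at the first few matching words
```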
Now that we've seen a few examples of how to write URL queries for the DataMuse API, try to solve our crossword challenge. Add a new box cell (by clicking on the + sign in the upper left corner) and in it, write a URL that would return 4-letter words that begin with S and end with D.
<details style='margin: 10px 25px'>
<summary>Solution</summary>
<a href='https://api.datamuse.com/words?sp=s??d' target='_blank'>https://api.datamuse.com/words?sp=s??d</a>
</details>
Examining API Data
Now let's examine the data the API returns. If you visit <a href='https://api.datamuse.com/words?sp=s??d' target='_blank'>https://api.datamuse.com/words?sp=s??d</a> in a web browser, you will see something similar to the following:
<a href='https://api.datamuse.com/words?sp=s??d' target='_blank'>
<img src='./assets/s--d.png'>
</a>
This output is an example of JSON, a popular data format often used by APIs. To make that JSON data easier to analyze, let's paste it into a "JSON prettifier", such as <a href='https://www.jsonformatter.io/' target='_blank'>https://www.jsonformatter.io/</a>. If you do so, you should see something similar to the following:
<a href='https://www.jsonformatter.io/' target='_blank' style='margin:10px 0; display: block'>
<img src='./assets/pretty-json.png'>
</a>
The data on the right is identical to the data on the left, except the data on the right is formatted (or "prettified") to be a little easier to read. Looking at that right-hand column, we see a few types of data that we will need to understand in order to be able to work more with this API.
Square Braces = List
First, we can see that the JSON data begins (and eventually ends) with <b>square braces</b>: [ ]. In JSON, square braces denote a <b>list</b>. In the case of the DataMuse API, each item in the list contains information about a word that meets the constraints specified in the URL we requested.
Squiggly Braces = Dictionary
Next, we can see that each of the words in our list is wrapped with <b>squiggly braces</b>: { }. In JSON, squiggly braces denote a <b>dictionary</b> (which is also sometimes referred to as an "object"). Dictionaries are in turn comprised of two kinds of things: <b>keys</b> (located to the left of a colon) and <b>values</b> (to the right of a colon):
<img src='assets/key-value.png' style='height:80px'>
The dictionary above contains two key:value pairs. The first has the key "word" and the value "shed". The second has the key "score" and the value 3010. We say that each of these values is "assigned" to the key it belongs to. So the value assigned to "word" is "shed", and the value assigned to "score" is 3010.
Quotation Marks = String
Examining the keys and the value "shed" above, we see that each is wrapped with <b>quotation marks</b>: " ". In JSON, quotation marks denote a <b>string</b> (a fancy word for "text data"). When viewing our JSON data using https://www.jsonformatter.io/, we can see that strings that are values inside a dictionary are colored red, but other text editors may not color code strings at all.
Whole Numbers = Integers
Finally, whole numbers such as the number 3010 (colored teal above) are called <b>integers</b>. Integers should not be confused with numbers that contain decimals, which are instead called <b>floats</b>.
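To make those four terms concrete, here is a minimal Python sketch that uses only the "shed" entry from the example above (the raw string below is simply that JSON typed out by hand):

```python
import json

raw = '[{"word": "shed", "score": 3010}]'   # JSON text: a list containing one dictionary

data = json.loads(raw)    # the square braces become a Python list
entry = data[0]           # the squiggly braces become a Python dictionary
print(entry["word"])      # "shed" -- a string
print(entry["score"])     # 3010   -- an integer
```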
Reviewing JSON Data
To review what we've learned about JSON so far, see if you can answer the following questions.
Consider the crossword puzzle we examined earlier. What URL would you use if you wanted to find words that could fit in <i>4 Down</i> below?
<div style='text-align: center'>
<img style='width:140px;margin: 20px auto;' src='./assets/crossword-puzzle.png'>
</div>
<details style='margin: 10px 25px'>
<summary>Solution</summary>
<a href='https://api.datamuse.com/words?sp=?o?' target='_blank'>https://api.datamuse.com/words?sp=?o?</a>
</details>
Consider the dictionary below. How many key:value pairs are there in this dictionary? What are they?
{
"word": "send",
"score": 1718
}
<details style='margin: 10px 25px'>
<summary>Solution</summary>
There are 2 key:value pairs. The first has the key "word" and the value "send". The second has the key "score" and the value 1718.
</details>
What are the data types of the elements in the following dictionary:
{
"name": "Allen Ginsberg",
"age": 28
"zip-code": "06510"
}
<details style='margin: 10px 25px'>
<summary>Solution</summary>
The keys in this dictionary are all strings. The values assigned to the "name" and "zip-code" keys are also a strings (note that "06510" is in quotation marks), while the value assigned to the "age" key is an integer.
</details>
Fetching JSON Data with Python
Now that we've explored JSON data by manually requesting data from an API, let's automate that process with Python. To do so, we can use the requests library:
End of explanation
for i in data:
print(i)
Explanation: As we can see, the data we get using the requests library is identical to the data we got when requesting the same URL through our web browser. Let's practice parsing the JSON data from DataMuse with Python.
You may know that we can "iterate through" (or examine one-by-one) the dictionaries in the list above using a <b>for loop</b>:
End of explanation
i = {'score': 3314, 'word': 'talk'}
Explanation: This for loop lets us examine each item in data one by one. In the first pass through the for loop above, the value of i is the first dictionary in data:
End of explanation
print(i['word'])
Explanation: To return the value assigned to a key, we can use the following syntax: print(dictionary_name[key_name]). For example, to access the value assigned to the 'word' key in dictionary i, we could type:
End of explanation
data = [
{
"name": "id labore ex et quam laborum",
"email": "[email protected]"
},
{
"name": "quo vero reiciendis velit similique earum",
"email": "[email protected]"
},
{
"name": "odio adipisci rerum aut animi",
"email": "[email protected]"
},
{
"name": "alias odio sit",
"email": "[email protected]"
}
]
# type your answer here
Explanation: Reviewing JSON Parsing
Using the skills discussed above, see if you can print the name of each item in the following list:
End of explanation
items = [
{
"id": 1,
"title": "quidem molestiae enim"
},
{
"id": 2,
"title": "sunt qui excepturi placeat culpa"
},
{
"id": 3,
"title": "omnis laborum odio"
},
{
"id": 4,
"title": "non esse culpa molestiae omnis sed optio"
},
{
"id": 5,
"title": "eaque aut omnis a"
}
]
# type your answer here
Explanation: <details style='margin: 10px 25px'>
<summary>Solution</summary>
<code style='display: block'>
for i in data:
print(i['name'])
</code>
</details>
Next, see if you can print the title of each item in the following list:
End of explanation
data = [
{
"name": "Leanne Graham",
"company": {
"name": "Google",
"catchPhrase": "Multi-layered client-server neural-net",
}
},
{
"name": "Ervin Howell",
"company": {
"name": "Twitter",
"catchPhrase": "Proactive didactic contingency",
}
},
{
"name": "Clementine Bauch",
"company": {
"name": "Facebook",
"catchPhrase": "Face to face bifurcated interface",
}
}
]
# type your answer here
Explanation: <details style='margin: 10px 25px'>
<summary>Solution</summary>
<code style='display: block'>
for i in items:
print(i['title'])
</code>
</details>
Bonus Round! See if you can print each company name in the list below. Here's a hint--you will need to access the value assigned to the company key, then you'll need to access the value assigned to the name key...
End of explanation
data = [
{
"name": "Thomas Boyle",
"profession": "chemist"
},
{
"name": "Margaret Cavendish",
"profession": "novelist"
},
{
"name": "Athanasius Kirchir"
}
]
Explanation: <details style='margin: 10px 25px'>
<summary>Solution</summary>
<code style='display: block'>
for i in data:
print(i['company']['name'])
</code>
</details>
Handling Missing Data
Oftentimes when we use an API, the data that is sent back to us is inconsistent. For example, we might find that some records are missing a key:value pair that other records contain, as in the following example:
End of explanation
# this code will return an error
for i in data:
print(i['profession'])
Explanation: If one attempts to access the value assigned to each "profession" key in the list above, Python will throw a KeyError error:
End of explanation
for i in data:
# try to run the following lines on each item in data
try:
# display the profession of the current item in data
print(i['profession'])
# if any of those lines throw an error, run the following for the given line
except:
# if the current item in data could not be processed, go to the next item in the list of data
continue
Explanation: The most general way to solve this problem in Python is to use try and except syntax. These two commands work together to say: "attempt to run the code inside the try block, and if that attempt fails, run the code inside the except block". Let's see an example of this below:
End of explanation
data = {"name": "Athanasius Kirchir"}
# try to get the value assigned to the "profession" key
# if that key doesn't exist, return "undefined"
value = data.get("profession", "undefined")
print(value)
Explanation: The try and except syntax above will allow you to handle any error that occurs in Python gracefully. In the case of dictionaries, however, we can use the .get method to handle KeyError events:
End of explanation
data = [
{
"author": "Sir Arthur Ignatius Conan Doyle",
"title": "The Mystery of Cloomber"
},
{
"author": "Dame Agatha Christie",
"title": "The Mousetrap"
},
{
"title": "Frankenstein"
}
]
for i in data:
print(i['author'])
# type your answer here
Explanation: Reviewing Missing Data
See if you can update the code below to return "Unknown" if there is no specified author key:
End of explanation
{
"totalItems": 6253723,
"endIndex": 20,
"startIndex": 1,
"itemsPerPage": 20,
"items": [
{
"sequence": 10,
"county": [
"Cook County"
],
"edition": None,
"frequency": "Daily (except Sunday and holidays)",
"id": "/lccn/sn83045487/1913-04-07/ed-1/seq-10/",
"subject": [
"Chicago (Ill.)--Newspapers.",
"Illinois--Chicago.--fast--(OCoLC)fst01204048"
],
"city": [
"Chicago"
],
"date": "19130407",
"title": "The day book. [volume]",
"end_year": 1917,
"note": [
"\"An adless daily newspaper.\"",
"Archived issues are available in digital format as part of the Library of Congress Chronicling America online collection.",
"Available on microfilm;",
"Description based on: Nov. 1, 1911.",
"Issue for <Nov. 24, 1911> lacks vol., no., and chronological designation.",
"Issue for Nov. 4, 1911 erroneously designated as Oct. 4, 1911.",
"Issue for v. 3, no. 290 (Sept. 7, 1914) erroneously designated as v. 3, no. 300 (Sept. 7, 1914). The error in numbering continues.",
"Issue for v. 5, no. 214 (June 7, 1916) erroneously designated as v. 5, no. 214 (June 6, 1916).",
"Issue for v. 5, no. 7 (Oct. 5, 1915) erroneously designated as v. 5, no. 7 (Sept. 5, 1915).",
"Issues for <May 7-17, 1915> called also \"Moving Picture Edition.\"",
"Issues have no page numbering.",
"Saturdays have Noon and Final editions, Dec. 28, 1912-June 21, 1913; Saturdays have Noon and Last editions, June 28, 1913-<Dec. 13, 1913>; began issuing daily Noon and Last editions, Dec. 20, 1913-July 6, 1917.",
"Vol. 5, no. 36 (Nov. 6, 1915) issue called also \"Garment Workers' Special Edition.\"",
"Volume numbering begins with Nov. 20, 1911 issue."
],
"state": [
"Illinois"
],
"section_label": "",
"type": "page",
"place_of_publication": "Chicago, Ill.",
"start_year": 1911,
"edition_label": "",
"publisher": "N.D. Cochran",
"language": [
"English"
],
"alt_title": [],
"lccn": "sn83045487",
"country": "Illinois",
"ocr_eng": "MAKING A BALKY HORSE MOVE \" \"\nWant to knftw how to start a balky\nhorse?\nThat sounds like\"\" a foolish ques\ntion and it's been echoed thVough all\nthe ages since horses came into gen\neral -.use. Everybody who owns one\nwould like to know how to start a\nbalky horse. Here's a way that has\nnever failed': Take an ordinary bam\nboo fish pole long enough to reach\nthe head of the horse from the car\nriage seat. Attach a round wooden\nbolt crossways to the end' of the rod\n. A MEMORY TEST\nand fasten copper rivet heads in .each\nend of the bolt.\nThen attach the two wires of an\nelectric battery, pne to each rivet\nhead and let them run along the pole\nto the handle where they will be fas\ntened to ordinary binding posts. Run\nwires from your battery, under the\nSarriage seat, to the handle of the\npole. When the horse balks turn on\nyour currenl. and touch the horse be\nhind the ears with the rivet heads.\nHe'll -move -no matter how deter--mined\nhe has been not to do so.\n\"Oh, yes, old bellow, I. still remem\nber that $5\"I borrowed two years ago.\nDid you think 4'd forgotten.it?\"-'\n\"Not fromthe way' you?d succeed-.\nedmrddgmgn&4e5Ince.\"\nSHORT HEAVYWEIGHT\ngootT\ngoshM i",
"batch": "iune_foxtrot_ver01",
"title_normal": "day book.",
"url": "https://chroniclingamerica.loc.gov/lccn/sn83045487/1913-04-07/ed-1/seq-10.json",
"place": [
"Illinois--Cook County--Chicago"
],
"page": ""
}
]
}
pass
Explanation: <details style='margin: 10px 25px'>
<summary>Solution</summary>
<code style='display: block'>
for i in data:
print(i.get('author', 'Unknown'))
</code>
</details>
Fetching Bulk Data from the Chronicling America API
Now that we've covered some of the basics of APIs and JSON parsing in Python, let's move on to the Chronicling America API, which has more interesting data in a more challenging format.
To get started with the Chronicling America API, let's take a look at the API documentation. There we see that we can place a basic query for full-text data that includes the word "horse" with the following URL:
<a target='_blank' href='https://chroniclingamerica.loc.gov/search/pages/results/?andtext=horse&format=json' style='text-align: center; margin: 10px auto; display: block'>https://chroniclingamerica.loc.gov/search/pages/results/?andtext=horse&format=json
</a>
The JSON data sent back is more complex than the JSON we've seen so far:
End of explanation
import requests
# specify the url to query
url = 'https://chroniclingamerica.loc.gov/search/pages/results/?format=json&andtext=horse'
# retrieve text records that contain the word specified in the URL call
response = requests.get(url)
# get the json data from the response
data = response.json()
# access the list of items in the returned JSON data
items = data['items']
print(items)
Explanation: We can see this response has five top-level keys:
"totalItems": 6253723, # indicates the total number of hits for our search
"endIndex": 20, # indicates the number of the last search result displayed
"startIndex": 1, # indicates the number of the first search result displayed
"itemsPerPage": 20, # indicates the number of results displayed
"items": [] # contains the data of interest
Likewise, each item in the items list has several keys. Let's focus on three of them:
"date" # the publication date of this item in YYYYMMDD format
"title" # the title of the publication
"ocr_eng" # the full OCR text for this item
Given this structure, let's obtain a list of items from our first page of results with Python:
End of explanation
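Before we go further, here is a hedged sketch of pulling just those three fields out of each item in the items list fetched above; because ocr_eng is occasionally absent, .get with an empty-string default is used (the empty-string fallback is an assumption, not part of the API):

```python
for item in items:
    date = item['date']            # publication date in YYYYMMDD format
    title = item['title']          # title of the publication
    ocr = item.get('ocr_eng', '')  # full OCR text, which may be missing
    print(date, title, len(ocr))
```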
# specify your search term below
search_term = 'horse'
# specify the url to query
url = 'https://chroniclingamerica.loc.gov/search/pages/results/?format=json&andtext=' + search_term
# retrieve text records that contain a given word
response = requests.get(url)
# get the json data from the response
data = response.json()
# access the list of items in the returned JSON data
items = data['items']
# display the title of each item
for i in items:
print(i['title'])
Explanation: If you examine the URL above, you can see ?andtext=horse. This is the syntax used by the Chronicling America API to allow users to find texts that contain a given word or phrase. Let's run the query again, but this time using a search word of personal interest to you:
End of explanation
print(data['totalItems'])
Explanation: Working with Paginated API Responses
Great! We've successfully fetched 20 responses from the server. However, if we run print(data['totalItems']), we'll see that there are many more hits for our query:
End of explanation
# create a list of integers between 10 and 20 counting by 2
list(range(10, 20, 2))
Explanation: To access the other hits for our query, we must request each "page" of results individually. A page of API results is just like a page of results displayed in a web browser--each page contains a fixed number of results. In the case of a Google search, each page contains 10 search results. In the case of the Chronicling America API, each page contains 20 results. Each API is different, but the concept of pagination is the same.
To paginate through the pages of results for our query, the Chronicling America documentation tells us, we can add a query parameter page=n to our url, where n is the index position of the page we want to fetch. For example, we can request:
```
# url for the first page for our search term
https://chroniclingamerica.loc.gov/search/pages/results/?format=json&andtext=horse&page=1
# url for the second page for our search term
https://chroniclingamerica.loc.gov/search/pages/results/?format=json&andtext=horse&page=2
```
Let's paginate through responses with Python. To do so, we can make use of the range function, which returns a list of integers between x and y counting by z:
End of explanation
import requests
# create a list of results
results = []
# fetch each page number
for i in range(1, 100, 1):
# specify the url to query
url = 'https://chroniclingamerica.loc.gov/search/pages/results/?format=json&andtext=horse&page=' + str(i)
# fetch this page of results
response = requests.get(url)
# get the json from the response
data = response.json()
# add this page of results to our list of results
results.append(data)
Explanation: Notice that the first argument passed to range()—the start value—is inclusive, but the second argument—the end value—is exclusive!
Let's use the range function to create a list of pages we wish to fetch:
End of explanation
# type your answer here
Explanation: Now results will be a list of dictionaries we can parse to fetch multiple pages of data!
Reviewing Pagination
Given what we covered above, see if you can fetch pages 100 through 200 (both inclusive) of results that include the word "horses". Beware of "off-by-one" problems!
End of explanation
# iterate over each result
for result in results:
# iterate over each item in this result
for item in result['items']:
# store a unique filename for this item
filename = item['id']
# clean the filename
filename = filename.strip('/')
filename = filename.replace('/', '-') + '.txt'
# get the ocr data from the item
ocr = item.get('ocr_eng', '')
# open the output file in write mode and save the English OCR content
open(filename, 'w').write(ocr)
Explanation: <details style='margin: 10px 25px'>
<summary>Solution</summary>
<code style='display: block'>
results = []
for i in range(100, 201, 1):
url = 'https://chroniclingamerica.loc.gov/search/pages/results/?format=json&andtext=horses&page=' + str(i)
results.append(requests.get(url).json())
</code>
</details>
Saving API Responses
Finally, let's revisit the pattern for saving data to disk using Python in order to process each item and save its text content to a file.
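As a small aside (not part of the original recipe), the same write can be wrapped in a context manager so each file handle is closed promptly; the utf-8 encoding is an assumption about the OCR text:

```python
with open(filename, 'w', encoding='utf-8') as outfile:
    outfile.write(ocr)
```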
End of explanation |
14,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Silicon Forest Math Series<br/>Oregon Curriculum Network
Introduction to Public Key Cryptography
Here in the Silicon Forest, we do not expect everyone to become a career computer programmer.
We do expect a lot of people will wish to program at some time in their career.
Coding skills give you the power to control machines and you might find appropriate and life-enhancing uses for this type of power.
To help you get the flavor of coding, we leverage concepts we expect you're already getting through your math courses.
In moving from pre-computer math, to computer math, and back again, we develop important conceptual bridges.
Generating the Prime Numbers
Lets look at a first such concept, that of a prime number.
The Fundamental Theorem of Arithmetic says every positive integer distills into a unique list of prime number factors. Duplicates are allowed.
But what are primes in the first place? Numbers with no factors other than themselves.
Step2: The above algorithm is known as "trial by division".
Keep track of all primes discovered so far, and test divide them, in increasing order, into a candidate number, until
Step3: How does Euclid's Method work? That's a great question and one your teacher should be able to explain. First see if you might figure it out for yourself...
Here's one explanation
Step5: Suppose we had asked for gcd(18, 81) instead? 18 is the remainder (no 81s go into it) whereas b was 81, so the while loop simply flips the two numbers around to give the example above.
The gcd function now gives us the means to compute totients and totatives of a number. The totatives of N are the strangers less than N, whereas the totient is the number of such strangers.
Step6: Where to go next is in the direction of Euler's Theorem, a generalization of Fermat's Little Theorem. The built-in pow(m, n, N) function will raise m to the n modulo N in an efficient manner.
Step7: Above we see repeating cycles of numbers, with the length of the cycles all dividing 16, the totient of the prime number 17.
pow(14, 2, 17) is 9, pow(14, 3, 17) is 7, and so on, coming back around the 14 at pow(14, 17, 17) where 17 is 1 modulo 16.
Numbers raised to any kth power modulo N, where k is 1 modulo the totient of N, end up staying the same number. For example, pow(m, (n * T(N)) + 1, N) == m for any n.
Step8: In public key cryptography, RSA in particular, a gigantic composite N is formed from two primes p and q.
N's totient will then be (p - 1) * (q - 1). For example if N = 17 * 23 (both primes) then T(N) = 16 * 22.
Step9: From this totient, we'll be able to find pairs (e, d) such that (e * d) modulo T(N) == 1.
We may find d, given e and T(N), by means of the Extended Euclidean Algorithm (xgcd below).
Raising some numeric message m to the eth power modulo N will encrypt the message, giving c.
Raising the encrypted message c to the dth power will cycle it back around to its starting value, thereby decrypting it.
c = pow(m, e, N)
m = pow(c, d, N)
where (e * d) % T(N) == 1.
For example | Python Code:
import pprint
def primes():
    """generate successive prime numbers (trial by division)"""
candidate = 1
_primes_so_far = [2] # first prime, only even prime
yield _primes_so_far[0] # share it!
while True:
candidate += 2 # check odds only from now on
for prev in _primes_so_far:
if prev**2 > candidate:
yield candidate # new prime!
_primes_so_far.append(candidate)
break
if not divmod(candidate, prev)[1]: # no remainder!
break # done looping
p = primes() # generator function based iterator
pp = pprint.PrettyPrinter(width=40, compact=True)
pp.pprint([next(p) for _ in range(30)]) # next 30 primes please!
Explanation: Silicon Forest Math Series<br/>Oregon Curriculum Network
Introduction to Public Key Cryptography
Here in the Silicon Forest, we do not expect everyone to become a career computer programmer.
We do expect a lot of people will wish to program at some time in their career.
Coding skills give you the power to control machines and you might find appropriate and life-enhancing uses for this type of power.
To help you get the flavor of coding, we leverage concepts we expect you're already getting through your math courses.
In moving from pre-computer math, to computer math, and back again, we develop important conceptual bridges.
Generating the Prime Numbers
Lets look at a first such concept, that of a prime number.
The Fundamental Theorem of Arithmetic says every positive integer distills into a unique list of prime number factors. Duplicates are allowed.
But what are primes in the first place? Numbers with no factors other than 1 and themselves.
End of explanation
def gcd(a, b):
while b:
a, b = b, a % b
return a
print(gcd(81, 18))
print(gcd(12, 44))
print(gcd(117, 17)) # strangers
Explanation: The above algorithm is known as "trial by division".
Keep track of all primes discovered so far, and test divide them, in increasing order, into a candidate number, until:
(A) either one of the primes goes evenly, in which case move on to the next odd
or
(B) until we know our candidate is a next prime, in which case yield it and append it to the growing list.
If we get past the 2nd root of the candidate, we conclude no larger factor will work, as we would have encountered it already as the smaller of the factor pair.
Passing this 2nd root milestone triggers plan B. Then we advance to the next candidate, ad infinitum.
Python pauses at each yield statement however, handing control back to the calling sequence, in this case a "list comprehension" containing a next() function for advancing to the next yield.
Coprimes, Totatives, and the Totient of a Number
From here, we jump to the idea of numbers being coprime to one another. A synonym for coprime is "stranger." Given two ordinary positive integers, they're strangers if they have no prime factors in common. For that to be true, they'd have no shared factors at all (not counting 1).
Guido van Rossum, the inventor of Python, gives us a pretty little implementation of what's known as Euclid's Method, an algorithm that's thousands of years old. It'll find the largest factor any two numbers have in common (gcd = "greatest common divisor").
Here it is:
End of explanation
print(81 % 18) # 18 goes into 81 four times, leaving remainder 9
print(18 % 9)  # 9 goes into 18 evenly, so the remainder 9 becomes the answer
Explanation: How does Euclid's Method work? That's a great question and one your teacher should be able to explain. First see if you might figure it out for yourself...
Here's one explanation:
If a smaller number divides a larger one without remainder then we're done, and that will always happen when that smaller number is 1 if not before.
If there is a remainder, what then? Lets work through an example.
81 % 18 returns a remainder of 9 in the first cycle. 18 didn't go into 81 evenly but if another smaller number goes into both 9, the remainder, and 18, then we have our answer.
9 itself does the trick and we're done.
End of explanation
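To make the loop concrete, here is a small trace of the (a, b) pairs for gcd(81, 18) -- an illustrative aside, not part of the original notebook:
a, b = 81, 18
while b:
    print(a, b)        # shows 81 18, then 18 9
    a, b = b, a % b
print('gcd =', a)      # gcd = 9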
def totatives(N):
# list comprehension!
return [x for x in range(1,N) if gcd(x,N)==1] # strangers only
def T(N):
    """Returns the number of numbers between (1, N) that
    have no factors in common with N: called the
    'totient of N' (sometimes phi is used in the docs)"""
return len(totatives(N)) # how many strangers did we find?
print("Totient of 100:", T(100))
print("Totient of 1000:", T(1000))
Explanation: Suppose we had asked for gcd(18, 81) instead? 18 is the remainder (no 81s go into it) whereas b was 81, so the while loop simply flips the two numbers around to give the example above.
The gcd function now gives us the means to compute totients and totatives of a number. The totatives of N are the strangers less than N, whereas the totient is the number of such strangers.
End of explanation
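For a concrete example (an aside, not from the original notebook), take N = 12: its totatives are the numbers under 12 that share no factors with it.
print(totatives(12))   # [1, 5, 7, 11]
print(T(12))           # 4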
def powers(N):
totient = T(N)
print("Totient of {}:".format(N), totient)
for t in totatives(N):
values = [pow(t, n, N) for n in range(totient + 1)]
cycle = values[:values.index(1, 1)] # first 1 after initial 1
print("{:>2}".format(len(cycle)), cycle)
powers(17)
Explanation: Where to go next is in the direction of Euler's Theorem, a generalization of Fermat's Little Theorem. The built-in pow(m, n, N) function will raise m to the n modulo N in an efficient manner.
End of explanation
from random import randint
def check(N):
totient = T(N)
for t in totatives(N):
n = randint(1, 10)
print(t, pow(t, (n * totient) + 1, N))
check(17)
Explanation: Above we see repeating cycles of numbers, with the length of the cycles all dividing 16, the totient of the prime number 17.
pow(14, 2, 17) is 9, pow(14, 3, 17) is 7, and so on, coming back around the 14 at pow(14, 17, 17) where 17 is 1 modulo 16.
Numbers raised to any kth power modulo N, where k is 1 modulo the totient of N, end up staying the same number. For example, pow(m, (n * T(N)) + 1, N) == m for any n.
End of explanation
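A quick check of the specific claims above (an aside, not in the original notebook):
assert pow(14, 2, 17) == 9
assert pow(14, 3, 17) == 7
assert pow(14, 17, 17) == 14   # 17 is 1 modulo 16, the totient of 17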
p = 17
q = 23
T(p*q) == (p-1)*(q-1)
Explanation: In public key cryptography, RSA in particular, a gigantic composite N is formed from two primes p and q.
N's totient will then be (p - 1) * (q - 1). For example if N = 17 * 23 (both primes) then T(N) = 16 * 22.
End of explanation
p = 37975227936943673922808872755445627854565536638199
q = 40094690950920881030683735292761468389214899724061
RSA_100 = p * q
totient = (p - 1) * (q - 1)
# https://en.wikibooks.org/wiki/
# Algorithm_Implementation/Mathematics/
# Extended_Euclidean_algorithm
def xgcd(b, n):
x0, x1, y0, y1 = 1, 0, 0, 1
while n != 0:
q, b, n = b // n, n, b % n
x0, x1 = x1, x0 - q * x1
y0, y1 = y1, y0 - q * y1
return b, x0, y0
# x = mulinv(b) mod n, (x * b) % n == 1
def mulinv(b, n):
g, x, _ = xgcd(b, n)
if g == 1:
return x % n
e = 3
d = mulinv(e, totient)
print((e*d) % totient)
import binascii
m = int(binascii.hexlify(b"I'm a secret"), 16)
print(m) # decimal encoding of byte string
c = pow(m, e, RSA_100) # raise to eth power
print(c)
m = pow(c, d, RSA_100) # raise to dth power
print(m)
binascii.unhexlify(hex(m)[2:]) # m is back where we started.
Explanation: From this totient, we'll be able to find pairs (e, d) such that (e * d) modulo T(N) == 1.
We may find d, given e and T(N), by means of the Extended Euclidean Algorithm (xgcd below).
Raising some numeric message m to the eth power modulo N will encrypt the message, giving c.
Raising the encrypted message c to the dth power will cycle it back around to its starting value, thereby decrypting it.
c = pow(m, e, N)
m = pow(c, d, N)
where (e * d) % T(N) == 1.
For example:
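Alongside the full-size RSA-100 example in the code cell, here is a toy round trip using the small primes 17 and 23 (illustrative only; real RSA needs enormous primes, and the message value 42 is an arbitrary choice):
p, q = 17, 23
N, tot = p * q, (p - 1) * (q - 1)   # N = 391, totient = 352
e = 3                                # 3 is a stranger to 352, so it works as e
d = mulinv(e, tot)                   # 235, since (3 * 235) % 352 == 1
m = 42                               # a small numeric message, less than N
c = pow(m, e, N)                     # encrypt
assert pow(c, d, N) == m             # decrypt brings back 42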
End of explanation |
14,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task
Step1: Define path to data
Step2: A few basic libraries that we'll need for the initial exercises
Step3: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Step4: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline
Step5: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object
Step6: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder
Step7: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
Step8: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
Step9: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
Step10: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four
Step11: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
Step12: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
Step13: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
Step14: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras
Step15: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
Step16: Here's a few examples of the categories we just imported
Step17: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition
Step18: ...and here's the fully-connected definition.
Step19: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software that expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model
Step20: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
Step21: We'll learn about what these different blocks do later in the course. For now, it's enough to know that
Step22: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
Step23: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
Step24: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data
Step25: From here we can use exactly the same steps as before to look at predictions from the model.
Step26: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label. | Python Code:
%matplotlib inline
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using jupyter notebook:
End of explanation
path = "data/dogscats/"
#path = "data/dogscats/sample/"
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
import utils; reload(utils)
from utils import plots
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
vgg = Vgg16()
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
batches = vgg.get_batches(path+'train', batch_size=4)
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
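As an aside, the fixed directory structure mentioned above looks roughly like this (illustrative, hypothetical file names):
# data/dogscats/train/cats/cat.1.jpg
# data/dogscats/train/dogs/dog.1.jpg
# data/dogscats/valid/cats/cat.100.jpg
# data/dogscats/valid/dogs/dog.100.jpg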
imgs,labels = next(batches)
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
plots(imgs, titles=labels)
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
End of explanation
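To make that concrete, here is what one hot encoded labels could look like for a hypothetical three-category problem (an illustrative aside, not part of the original notebook):
# one row per image: cat -> [1, 0, 0], dog -> [0, 1, 0], kangaroo -> [0, 0, 1]
three_class_labels = np.array([[1, 0, 0],
                               [0, 1, 0],
                               [0, 0, 1]])
print(three_class_labels)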
vgg.predict(imgs, True)
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
vgg.classes[:4]
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
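Conceptually, finetune() swaps the 1,000-way imagenet output layer for a new layer sized to our two classes and freezes the pretrained layers. A rough sketch of that idea is shown below, left commented out because the real implementation lives in vgg16.py and may differ in detail:
# vgg.model.pop()                                   # drop the imagenet output layer
# for layer in vgg.model.layers:
#     layer.trainable = False                       # freeze the pretrained layers
# vgg.model.add(Dense(batches.nb_class, activation='softmax'))
# vgg.model.compile(optimizer=RMSprop(lr=0.001),
#                   loss='categorical_crossentropy', metrics=['accuracy'])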
vgg.finetune(batches)
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
classes[:5]
Explanation: Here's a few examples of the categories we just imported:
End of explanation
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
Explanation: ...and here's the fully-connected definition.
End of explanation
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
    return x[:, ::-1] # reverse axis rgb->bgr
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
model = VGG_16()
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
batch_size = 4
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation |
14,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import modules
Step1: Load data
For this exercise, we will be using a dataset of housing prices in Boston during the 1970s. Python's super-awesome sklearn package already has the data we need to get started. Below is the command to load the data. The data is stored as a dictionary.
The 'DESCR' is a description of the data and the command for printing it is below. Note all the features we have to work with. From the dictionary, we need the data and the target variable (in this case, housing price). Store these as variables named "data" and "price", respectively. Once you have these, print their shapes to see that everything checks out with the DESCR. | Python Code:
from sklearn.datasets import load_boston
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import scale
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
from sklearn.cross_validation import KFold
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: Import modules
End of explanation
boston = load_boston()
print(boston.DESCR)
Explanation: Load data
For this exercise, we will be using a dataset of housing prices in Boston during the 1970s. Python's super-awesome sklearn package already has the data we need to get started. Below is the command to load the data. The data is stored as a dictionary.
The 'DESCR' is a description of the data and the command for printing it is below. Note all the features we have to work with. From the dictionary, we need the data and the target variable (in this case, housing price). Store these as variables named "data" and "price", respectively. Once you have these, print their shapes to see that everything checks out with the DESCR.
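One possible answer, as a minimal sketch (the Boston data has 506 samples and 13 features, so the shapes below are what you should expect):
data = boston.data      # the feature matrix
price = boston.target   # the housing prices
print(data.shape)       # (506, 13)
print(price.shape)      # (506,)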
End of explanation |
14,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Classification Lab 1
In this lab we will learn how to generate synthetic data and how to apply various built-in classifiers to classify the data. The goal of this lab is to introduce you to a subset of classification methods and to show which methods perform better depending on the structure of the data. We will also see where data visualization and background information can come in handy!
Step1: Python's scikit-learn (or sklearn) is a Machine Learning library equipped with simple and efficient tools for data mining and data analysis. In general, a learning problem consists of a set of $n$ samples of data and then tries to predict properties of unknown data.
If each sample is more than a single number (aka multivariate), it is said to have several attributes or features.
We will mainly focus on supervised learning, in which each data point has an associated label. If we have time, we will discuss unsupervised learning and what techniques are commonly used today.
Step2: make_classification generates a random classification problem where the number of class is user-specified. Later we will see make_moons and make_circles.
Step3: $K$-Nearest Neighbors
The first classifier we are going to consider is $k$-nearest neighbors. The idea behind the method is that the input consists of the $k$ closest training examples in the feature space. An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its $k$ nearest neighbors, hence the name.
Activity! Get up!
Step4: Not bad! We see that the $k$-nearest neighbors algorithm was able to classify the unseen data with an accuracy of 90%!
Decision Tree
Next, we will take a look at Decision Tree Classification. A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (or feature), each branch represents the outcome of the test, and each leaf node represents a class label.
Step5: Even better! A Decision Tree Classifier was able to predict the unseen data labels with an accuracy of over 93%!
Different Structured Data
Now let's take a look at how these algorithms perform on data with different structural relationships. As you may think, sklearn's make_moons can artificially generate data whose class labels form "moons" around each other. Similarly, make_circles generates data grouped with circular structure.
The cells below provide a visual of what these functions output. | Python Code:
# Import base libraries
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
Explanation: Intro to Classification Lab 1
In this lab we will learn how to generate synthetic data and how to apply various built-in classifiers to classify the data. The goal of this lab is to introduce you to a subset of classification methods and to show which methods perform better depending on the structure of the data. We will also see where data visualization and background information can come in handy!
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
X, y = make_classification(n_classes = 2, n_samples = 100, \
n_features = 2, n_redundant = 0, \
random_state = 1, n_clusters_per_class = 1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape) # add some noise to the data
linearly_separable = (X, y)
Explanation: Python's scikit-learn (or sklearn) is a Machine Learning library equipped with simple and efficient tools for data mining and data analysis. In general, a learning problem consists of a set of $n$ samples of data and then tries to predict properties of unknown data.
If each sample is more than a single number (aka multivariate), it is said to have several attributes or features.
We will mainly focus on supervised learning, in which each data point has an associated label. If we have time, we will discuss unsupervised learning and what techniques are commonly used today.
End of explanation
figure = plt.figure( figsize=(27,9) )
color_map = plt.cm.RdBu #Red-Blue colormap
cm_bright = ListedColormap(['#FF0000','#0000FF'])
X = StandardScaler().fit_transform(X)# center and scale the data
# Split the data to reserve some for model validation
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=42)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 # for plotting
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 # y-axis, not to be confused with y labels
# Don't worry about the mesh grid now, we will use it to make pretty plots!
h = 0.02 # the mesh step size
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
plt.title("(Make Classification) Input Data", fontsize = 28)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
plt.axis([x_min, x_max, y_min, y_max])
plt.grid(True)
plt.xlabel('x', fontsize = 28), plt.ylabel('y', fontsize = 28)
plt.tick_params(labelsize = 20)
plt.show()
plt.close()
Explanation: make_classification generates a random classification problem where the number of class is user-specified. Later we will see make_moons and make_circles.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
names = ["Nearest Neighbors"]
#cl = [KNeighborsClassifier(3,weights = 'distance')] # Let's choose k=3
cl = [KNeighborsClassifier(3)] # Let's choose k=3
cl[0].fit(X_train,y_train)
score = cl[0].score(X_test,y_test)
# Plot the decision boundary for which we will assign a color to each class
# concatonate vectorized grid and compute probability estimates for the data
Z = cl[0].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1] # with confidence bds
#Z = cl[0].predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
plt.title('Decision Boundary', fontsize = 18)
plt.show()
plt.close()
# Adding the data...
plt.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
plt.title(names[0], fontsize = 20)
plt.show()
plt.close()
#np.set_printoptions(threshold='nan')
#cl[0].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1] # with confidence bds
print(score)
Explanation: $K$-Nearest Neighbors
The first classifier we are going to consider is $k$-nearest neighbors. The idea behind the method is that the input consists of the $k$ closest training examples in the feature space. An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its $k$ nearest neighbors, hence the name.
Activity! Get up!
End of explanation
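The majority vote idea can be sketched by hand (an illustrative aside, not part of the original lab; the labels below are made up):
from collections import Counter
neighbor_labels = [1, 0, 1]                 # labels of the 3 closest training points
vote = Counter(neighbor_labels).most_common(1)[0][0]
print(vote)                                 # 1 -- the majority class wins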
from IPython.display import Image
Image(url='https://image.slidesharecdn.com/decisiontree-151015165353-lva1-app6892/95/classification-using-decision-tree-12-638.jpg?cb=1444928106')
Image(url='http://help.prognoz.com/en/mergedProjects/Lib/img/decisiontree.gif')
from sklearn.tree import DecisionTreeClassifier
names.append("Decision Tree")
cl.append(DecisionTreeClassifier(max_depth=5))
cl[1].fit(X_train,y_train)
score = cl[1].score(X_test,y_test)
# Plot the decision boundary for which we will assign a color to each class
# concatonate vectorized grid and compute probability estimates for the data
#Z = cl[1].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1] # with confidence bds
Z = cl[1].predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
plt.title('Decision Boundary', fontsize = 18)
plt.show()
plt.close()
# Adding the data...
plt.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
plt.title(names[1], fontsize = 20)
plt.show()
plt.close()
print(score)
Explanation: Not bad! We see that the $k$-nearest neighbors algorithm was able to classify the unseen data with an accuracy of 90%!
Decision Tree
Next, we will take a look at Decision Tree Classification. A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (or feature), each branch represents the outcome of the test, and each leaf node represents a class label.
End of explanation
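In code, a tiny hand-written tree makes the node/branch/leaf idea concrete (an illustrative aside; the real classifier learns its tests and thresholds from the data):
def toy_tree(x1, x2):
    if x1 <= 0.0:      # internal node: test on feature x1
        return 0       # leaf: class 0
    if x2 <= 0.5:      # internal node: test on feature x2
        return 0       # leaf: class 0
    return 1           # leaf: class 1
print(toy_tree(-1.2, 0.9))  # 0
print(toy_tree(0.8, 0.9))   # 1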
X, y = make_moons(noise = 0.3, random_state = 0, n_samples = 200)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size = 0.4, random_state = 42)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
plt.title("Make Moons Dataset", fontsize = 28)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
plt.axis([x_min, x_max, y_min, y_max])
plt.grid(True)
plt.xlabel('x', fontsize = 28), plt.ylabel('y', fontsize = 28)
plt.tick_params(labelsize = 20)
plt.show()
plt.close()
ax = plt.subplot(1,2,1)
# Fit make_moons dataset with k-Nearest Neighbors Classifier
cl[0].fit(X_train,y_train)
score = cl[0].score(X_test,y_test)
Z = cl[0].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1]
Z = Z.reshape(xx.shape)
ax.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
ax.set_title(names[0], fontsize = 20)
# Fit make_circles dataset with Decision Tree Classifier
ax2 = plt.subplot(1,2,2)
cl[1].fit(X_train,y_train)
score2 = cl[1].score(X_test,y_test)
Z = cl[1].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1]
Z = Z.reshape(xx.shape)
ax2.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
ax2.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
ax2.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
ax2.set_title(names[1], fontsize = 20)
plt.show()
plt.close()
print("Nearest Neighbors Score:",score)
print("Decision Tree Score:",score2)
X, y = make_circles(noise = 0.2, factor = 0.5, random_state = 1,n_samples = 200)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size = 0.4, random_state = 42)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
plt.title("Make Circles Dataset", fontsize = 28)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
plt.axis([x_min, x_max, y_min, y_max])
plt.grid(True)
plt.xlabel('x', fontsize = 28), plt.ylabel('y', fontsize = 28)
plt.tick_params(labelsize = 20)
plt.show()
plt.close()
ax = plt.subplot(1,2,1)
# Fit make_circles dataset with k-Nearest Neighbors Classifier
cl[0].fit(X_train,y_train)
score = cl[0].score(X_test,y_test)
Z = cl[0].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1]
Z = Z.reshape(xx.shape)
ax.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
ax.set_title(names[0], fontsize = 20)
# Fit make_circles dataset with Decision Tree Classifier
ax2 = plt.subplot(1,2,2)
cl[1].fit(X_train,y_train)
score2 = cl[1].score(X_test,y_test)
Z = cl[1].predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1]
Z = Z.reshape(xx.shape)
ax2.contourf(xx,yy,Z, cmap = color_map, alpha= 0.8)
ax2.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
ax2.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.4)
ax2.set_title(names[1], fontsize = 20)
plt.show()
plt.close()
print("Nearest Neighbors Score:",score)
print("Decision Tree Score:",score2)
Explanation: Even better! A Decision Tree Classifier was able to predict the unseen data labels with an accuracy of over 93%!
Different Structured Data
Now let's take a look at how these algorithms perform on data with different structural relationships. As you may think, sklearn's make_moons can artificially generate data whose class labels form "moons" around each other. Similarly, make_circles generates data grouped with circular structure.
The cells below provide a visual of what these functions output.
End of explanation |