path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M) |
---|---|
examples/iter8/progressive_rollout/separate_sdeps/abtest.ipynb | ###Markdown
Progressive Rollouts using Two Seldon Deployments. In this example we will A/B test two Iris models: an SKLearn model and an XGBoost model. We will run a progressive rollout, allowing Iter8 to control the traffic to the two Seldon Deployments and gradually move traffic to the best model. Install Dependencies * Istio * Seldon Core * Seldon Core Analytics * Iter8 You can create a Kind cluster with all dependencies installed using [Ansible](https://www.ansible.com/): ``` pip install ansible openshift ansible-galaxy collection install git+https://github.com/SeldonIO/ansible-k8s-collection.git,v0.1.0 ``` Then from the `examples/iter8` folder run: ``` ansible-playbook playbooks/iter8.yml ``` Create AB Test with Two Seldon Deployments
###Code
!cat baseline.yaml
!kubectl apply -f baseline.yaml
!cat candidate.yaml
!kubectl apply -f candidate.yaml
!kubectl wait --for condition=ready --timeout=600s pods --all -n ns-baseline
!kubectl wait --for condition=ready --timeout=600s pods --all -n ns-candidate
###Output
pod/iris-default-0-classifier-7fff869d67-g5qnh condition met
###Markdown
Create Virtual Service to Split Traffic
###Code
!cat routing-rule.yaml
!kubectl apply -f routing-rule.yaml
###Output
virtualservice.networking.istio.io/routing-rule created
###Markdown
Create some load on the models. We will send requests, which will be split by the Seldon AB test, as well as random feedback to both models, with the feedback favouring the candidate.
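For reference, the sketch below shows roughly the kind of traffic the fortio jobs generate; it is an illustrative addition, not part of the original notebook. The ingress address is a placeholder, and the standard Seldon Core REST paths for the `iris` deployments in `ns-baseline` and `ns-candidate` are assumed (the actual host/path routing is defined by `routing-rule.yaml` and `fortio.yaml`).

```
import random
import requests

# Assumption: istio ingress gateway address; substitute the clusterIP
# captured in URL_VALUE in the cell below.
INGRESS = "http://istio-ingressgateway.example"

def predict(namespace):
    # Standard Seldon Core REST prediction call for the "iris" deployment.
    payload = {"data": {"ndarray": [[6.8, 2.8, 4.8, 1.4]]}}
    r = requests.post(
        f"{INGRESS}/seldon/{namespace}/iris/api/v1.0/predictions",
        json=payload,
    )
    return r.json()

def send_feedback(namespace, reward):
    # Reward feedback; the user-engagement metric counts these requests.
    requests.post(
        f"{INGRESS}/seldon/{namespace}/iris/api/v1.0/feedback",
        json={"reward": reward},
    )

print(predict("ns-baseline"))
send_feedback("ns-baseline", reward=random.choice([0, 1]))      # baseline: ~50% positive
send_feedback("ns-candidate", reward=random.choice([0, 1, 1]))  # candidate: favoured
```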
###Code
!cat fortio.yaml
!URL_VALUE="http://$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.clusterIP}')" && \
sed "s+URL_VALUE+${URL_VALUE}+g" fortio.yaml | kubectl apply -f -
!kubectl wait --for condition=ready --timeout=600s pods --all -n default
###Output
pod/fortio-irisv1-rewards-t5drl condition met
pod/fortio-irisv2-rewards-rb9k8 condition met
pod/fortio-requests-fkp95 condition met
###Markdown
Create Metrics to evaluate. These are a standard set of metrics we use in all examples.
###Code
!cat ../../metrics.yaml
!kubectl create -f ../../metrics.yaml
!kubectl get metrics -n iter8-seldon
###Output
NAME TYPE DESCRIPTION
95th-percentile-tail-latency Gauge 95th percentile tail latency
error-count Counter Number of error responses
error-rate Gauge Fraction of requests with error responses
mean-latency Gauge Mean latency
request-count Counter Number of requests
user-engagement Gauge Number of feedback requests
###Markdown
Create Progressive Rollout Experiment * Run 15 iterations with 5-second gaps between the default and candidate models * Both models must pass the objectives * The winner will be chosen based on the user-engagement metric
###Code
!cat experiment.yaml
!kubectl create -f experiment.yaml
###Output
experiment.iter8.tools/quickstart-exp created
###Markdown
Monitor Experiment. Download iter8ctl: ```GO111MODULE=on GOBIN=/usr/local/bin go get github.com/iter8-tools/[email protected]``` Then: ```while clear; do kubectl get experiment quickstart-exp -o yaml | iter8ctl describe -f -; sleep 8; done``` By the end you should see that the xgboost candidate model is promoted.
###Code
!kubectl wait experiment quickstart-exp --for=condition=Completed --timeout=300s
!kubectl get experiment quickstart-exp
###Output
NAME TYPE TARGET STAGE COMPLETED ITERATIONS MESSAGE
quickstart-exp A/B iris Completed 10 ExperimentCompleted: Experiment Completed
###Markdown
Cleanup
###Code
!kubectl delete -f fortio.yaml
!kubectl delete -f experiment.yaml
!kubectl delete -f ../../metrics.yaml
!kubectl delete -f routing-rule.yaml
!kubectl delete -f baseline.yaml
!kubectl delete -f candidate.yaml
###Output
job.batch "fortio-requests" deleted
job.batch "fortio-irisv1-rewards" deleted
job.batch "fortio-irisv2-rewards" deleted
experiment.iter8.tools "quickstart-exp" deleted
namespace "iter8-seldon" deleted
metric.iter8.tools "95th-percentile-tail-latency" deleted
metric.iter8.tools "error-count" deleted
metric.iter8.tools "error-rate" deleted
metric.iter8.tools "mean-latency" deleted
metric.iter8.tools "request-count" deleted
metric.iter8.tools "user-engagement" deleted
virtualservice.networking.istio.io "routing-rule" deleted
namespace "ns-baseline" deleted
seldondeployment.machinelearning.seldon.io "iris" deleted
namespace "ns-candidate" deleted
seldondeployment.machinelearning.seldon.io "iris" deleted
|
ch3/02-keras-sequential_api.ipynb | ###Markdown
Using Keras Sequential API. `keras` is the implementation of the [Keras API specification](https://keras.io)
###Code
import keras
keras.__version__
###Output
_____no_output_____
###Markdown
`tf.keras` is TensorFlow's implementation of the [Keras API specification](https://keras.io). `tf.keras` can run any Keras-compatible code. Be careful: the `tf.keras` version in the latest [TensorFlow](https://www.tensorflow.org/) release might not be the same as the latest `keras` version from [PyPI](https://pypi.org/).
###Code
import tensorflow as tf
from tensorflow import keras
keras.__version__
import tensorflow as tf
from tensorflow.keras.layers import Dense
###Output
_____no_output_____
###Markdown
Let's start by creating a sequential model, passing a list of layer instances to the constructor
###Code
model = tf.keras.Sequential([
# Add a fully connected layer with 1024 units to the model
tf.keras.layers.Dense(1024, input_dim=64),
# Add an activation layer with TanH activation function
tf.keras.layers.Activation('tanh'),
# Add a fully connected layer with 256 units to the model
tf.keras.layers.Dense(256),
# Add an activation layer with ReLU activation function
tf.keras.layers.Activation('relu'),
# Add a fully connected layer with 10 units to the model
tf.keras.layers.Dense(10),
# Add an activation layer with softmax activation function
tf.keras.layers.Activation('softmax')
])
###Output
_____no_output_____
###Markdown
Let's check out what the model summary looks like:
###Code
# Display Model Summary
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 1024) 66560
_________________________________________________________________
activation (Activation) (None, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 256) 262400
_________________________________________________________________
activation_1 (Activation) (None, 256) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 2570
_________________________________________________________________
activation_2 (Activation) (None, 10) 0
=================================================================
Total params: 331,530
Trainable params: 331,530
Non-trainable params: 0
_________________________________________________________________
###Markdown
Another way to create a sequential model is to instantiate a Sequential class and after that add layers via the .add() method.
###Code
model = tf.keras.Sequential()
# Add a fully connected layer with 1024 units to the model
model.add(tf.keras.layers.Dense(1024, input_dim=64))
# Add an activation layer with TanH activation function
model.add(tf.keras.layers.Activation('tanh'))
# Add a fully connected layer with 256 units to the model
model.add(tf.keras.layers.Dense(256))
# Add an activation layer with ReLU activation function
model.add(tf.keras.layers.Activation('relu'))
# Add a fully connected layer with 10 units to the model
model.add(tf.keras.layers.Dense(10))
# Add an activation layer with softmax activation function
model.add(tf.keras.layers.Activation('softmax'))
###Output
_____no_output_____
###Markdown
Let's check out what the model summary looks like:
###Code
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 1024) 66560
_________________________________________________________________
activation_3 (Activation) (None, 1024) 0
_________________________________________________________________
dense_4 (Dense) (None, 256) 262400
_________________________________________________________________
activation_4 (Activation) (None, 256) 0
_________________________________________________________________
dense_5 (Dense) (None, 10) 2570
_________________________________________________________________
activation_5 (Activation) (None, 10) 0
=================================================================
Total params: 331,530
Trainable params: 331,530
Non-trainable params: 0
_________________________________________________________________
###Markdown
Let us take a closer look at the layer configuration: - The `activation` argument sets the activation function, which decides whether a neuron should be activated or not
###Code
# Creation of a dense layer with a sigmoid activation function:
Dense(256, activation='sigmoid')
# Or:
Dense(256, activation=tf.keras.activations.sigmoid)
###Output
_____no_output_____
###Markdown
- The initial weights are defined by setting `kernel_initializer` and `bias_initializer` parameters.
###Code
# A dense layer with a kernel initialized from a random normal distribution:
Dense(256, kernel_initializer='random_normal')
# A dense layer with a bias vector initialized with a constant value of 5.0:
Dense(256, bias_initializer=tf.keras.initializers.Constant(value=5))
###Output
_____no_output_____
###Markdown
- The `kernel_regularizer` and `bias_regularizer` parameters apply regularization penalties (such as L1 or L2) to the layer's kernel matrix and bias vector, respectively
###Code
# A dense layer with L1 regularization of factor 0.01 applied to the kernel matrix:
Dense(256, kernel_regularizer=tf.keras.regularizers.l1(0.01))
# A dense layer with L2 regularization of factor 0.01 applied to the bias vector:
Dense(256, bias_regularizer=tf.keras.regularizers.l2(0.01))
###Output
_____no_output_____
###Markdown
Specifying the input shape. The `input_dim` argument does not include the batch size, because Keras leaves the batch dimension unspecified: the model should be able to deal with any batch size.
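As a quick illustrative check (an addition to the original notebook), the unspecified batch dimension shows up as `None` in the model's input and output shapes:

```
import tensorflow as tf

# Sketch: the batch dimension is reported as None (unspecified),
# so the same model accepts batches of any size at fit/predict time.
m = tf.keras.Sequential([tf.keras.layers.Dense(256, input_dim=64)])
print(m.input_shape)   # (None, 64)
print(m.output_shape)  # (None, 256)
```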
###Code
Dense(256, input_dim=(64))
###Output
_____no_output_____
###Markdown
However, we can force the batch_size with the `batch_size` argument.
###Code
Dense(256, input_dim=(64), batch_size=10)
###Output
_____no_output_____
###Markdown
Creation of the 3 toy datasets
###Code
import numpy as np
data = np.random.random((2000, 64))
labels = np.random.random((2000, 10))
val_data = np.random.random((500, 64))
val_labels = np.random.random((500, 10))
test_data = np.random.random((500, 64))
test_labels = np.random.random((500, 10))
###Output
_____no_output_____
###Markdown
Compilation
###Code
# Compile a model using adam optimizer
# for categorical cross entropy loss and categorical accuracy metric.
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"]
)
###Output
_____no_output_____
###Markdown
Training from NumPy data:
###Code
model.fit(data, labels, epochs=10, batch_size=50,
validation_data=(val_data, val_labels))
###Output
Epoch 1/10
40/40 [==============================] - 0s 5ms/step - loss: 43.4399 - accuracy: 0.0915 - val_loss: 117.7406 - val_accuracy: 0.0840
Epoch 2/10
40/40 [==============================] - 0s 3ms/step - loss: 258.7852 - accuracy: 0.0915 - val_loss: 226.4188 - val_accuracy: 0.1120
Epoch 3/10
40/40 [==============================] - 0s 3ms/step - loss: 203.4489 - accuracy: 0.1060 - val_loss: 297.2030 - val_accuracy: 0.0940
Epoch 4/10
40/40 [==============================] - 0s 3ms/step - loss: 614.5685 - accuracy: 0.1105 - val_loss: 1034.5183 - val_accuracy: 0.1120
Epoch 5/10
40/40 [==============================] - 0s 3ms/step - loss: 1314.0310 - accuracy: 0.1000 - val_loss: 1319.0709 - val_accuracy: 0.0880
Epoch 6/10
40/40 [==============================] - 0s 3ms/step - loss: 1916.9198 - accuracy: 0.0925 - val_loss: 2958.2812 - val_accuracy: 0.0880
Epoch 7/10
40/40 [==============================] - 0s 2ms/step - loss: 3135.3259 - accuracy: 0.0840 - val_loss: 3423.9438 - val_accuracy: 0.1040
Epoch 8/10
40/40 [==============================] - 0s 2ms/step - loss: 3564.3628 - accuracy: 0.1040 - val_loss: 4568.0073 - val_accuracy: 0.0980
Epoch 9/10
40/40 [==============================] - 0s 2ms/step - loss: 5086.2075 - accuracy: 0.0940 - val_loss: 5656.1157 - val_accuracy: 0.1100
Epoch 10/10
40/40 [==============================] - 0s 3ms/step - loss: 5816.4668 - accuracy: 0.0980 - val_loss: 7082.7422 - val_accuracy: 0.1080
###Markdown
Evaluation: returns the loss value & metrics values for the model in test mode.
###Code
model.evaluate(test_data, test_labels, batch_size=50)
###Output
10/10 [==============================] - 0s 2ms/step - loss: 6922.4644 - accuracy: 0.1080
###Markdown
Prediction
###Code
# Prediction
result = model.predict(test_data, batch_size=50)
print(result.shape)
###Output
_____no_output_____ |
_notebooks/2020-04-15-Data Science Project imporvement, using LightGBM to gain more accuracy.ipynb | ###Markdown
Data Science Project improvement: using LightGBM to gain more accuracy with no need for One-Hot Encoding > Using LightGBM native categorical feature support for the Adult Dataset - toc: true - badges: true - comments: true - categories: [LightGBM] Overview. In the last Data Science Project we worked on the U.S. Adult Income Dataset and reached around `86%` model accuracy, which would place near the top-level accuracy in a [Kaggle](https://www.kaggle.com) competition. We went through the following steps: 1. Understand the business problem. 2. EDA (Exploratory Data Analysis): look through and investigate the overall dataset, visualize it with matplotlib, and find any missing values and outliers. 3. Data cleaning: impute the missing values and outliers. 4. Baseline model: a Dummy classifier gave us `75%` accuracy as the baseline, meaning that below `75%` accuracy a model does nothing better than flipping a coin, and above this value it has some skill at classifying the labels. 5. Model evaluation and fine-tuning: we evaluated `Support Vector Machine`, `RandomForestClassifier`, `BaggingClassifier`, `GradientBoostingClassifier` and a `Neural Network`; the best-performing model was `GradientBoostingClassifier`, which provided `86%` accuracy. Today we are going to use another lightweight, powerful, and fast algorithm: [LightGBM](https://github.com/microsoft/LightGBM), open-sourced in 2017 and now maintained by [Microsoft](https://www.microsoft.com/en-sg). As the `EDA` was already done in the last blog, we will skip it and move directly into today's topic. Get the imports done and read the dataset***
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import lightgbm as lgb
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv('datasets/adult.csv', na_values="?")
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 48842 entries, 0 to 48841
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 48842 non-null int64
1 workclass 46043 non-null object
2 fnlwgt 48842 non-null int64
3 education 48842 non-null object
4 educational-num 48842 non-null int64
5 marital-status 48842 non-null object
6 occupation 46033 non-null object
7 relationship 48842 non-null object
8 race 48842 non-null object
9 gender 48842 non-null object
10 capital-gain 48842 non-null int64
11 capital-loss 48842 non-null int64
12 hours-per-week 48842 non-null int64
13 native-country 47985 non-null object
14 income 48842 non-null object
dtypes: int64(6), object(9)
memory usage: 5.6+ MB
###Markdown
Data Cleaning part: scale the numerical columns and label-encode the binary categorical columns**** The target column `income` needs to be encoded as `1` and `0`* As does the `gender` column
###Code
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
num_col = df.select_dtypes(exclude=['object', 'datetime']).columns
df[num_col] = scaler.fit_transform(df[num_col])
le = LabelEncoder()
df['gender'] = le.fit_transform(df['gender'])
df['income'] = le.fit_transform(df['income'])
df
###Output
_____no_output_____
###Markdown
Data Cleaning Part: Impute the missing values***The missing values all fall into categorical features
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Impute the missing values with the most frequent value
###Code
df = df.fillna(df.mode().iloc[0])
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Data Cleaning Part: Convert the `object` data type into the `category` data type***LightGBM can handle categorical features natively, but first we need to convert the `object` dtype columns to the `category` dtype so that LightGBM can recognise and handle them.
###Code
for column in df.columns:
if df[column].dtype == 'object':
df[column] = df[column].astype('category')
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 48842 entries, 0 to 48841
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 48842 non-null float64
1 workclass 48842 non-null category
2 fnlwgt 48842 non-null float64
3 education 48842 non-null category
4 educational-num 48842 non-null float64
5 marital-status 48842 non-null category
6 occupation 48842 non-null category
7 relationship 48842 non-null category
8 race 48842 non-null category
9 gender 48842 non-null int64
10 capital-gain 48842 non-null float64
11 capital-loss 48842 non-null float64
12 hours-per-week 48842 non-null float64
13 native-country 48842 non-null category
14 income 48842 non-null int64
dtypes: category(7), float64(6), int64(2)
memory usage: 3.3 MB
###Markdown
Modeling Part***
###Code
X = df.drop('income', axis=1)
y = df['income']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
clf = lgb.LGBMClassifier(objective='binary', silent=False, colsample_bytree=0.9, subsample=0.9, learning_rate=0.05)
fit_params = {
'early_stopping_rounds': 10,
'eval_metric': 'accuracy',
'eval_set': [(X_test, y_test)],
'eval_names': ['valid'],
'verbose': 100,
'feature_name': 'auto', # actually this is default
'categorical_feature': 'auto' # actually this is default
}
clf.fit(X_train, y_train, **fit_params)
print(f"The Model Accuracy: {(clf.score(X_test, y_test)*100):.2f}%")
###Output
The Model Accuracy: 87.53%
###Markdown
Accuracy improvement***Compared to the last blog, where the best-performing model, `GradientBoostingClassifier`, achieved around `86%` accuracy, LightGBM reaches about `87.5%` here without One-Hot Encoding the categorical features, roughly a `1%` accuracy improvement.
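As an added sanity check on top of the single accuracy figure (this snippet is an addition to the original post, and assumes the `clf`, `X_test` and `y_test` defined above), we can also look at per-class precision and recall:

```
from sklearn.metrics import classification_report, confusion_matrix

# Per-class precision/recall and the confusion matrix for the fitted model.
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred, digits=3))
print(confusion_matrix(y_test, y_pred))
```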
###Code
%matplotlib inline
feat_imp = pd.Series(clf.feature_importances_, index=X.columns)
feat_imp.nlargest(30).plot(kind='barh', figsize=(8,10))
###Output
_____no_output_____ |
Section-03-Variable-Characteristics/03.6-Outliers.ipynb | ###Markdown
Outliers. An outlier is a data point which is significantly different from the remaining data. "An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism." [D. Hawkins. Identification of Outliers, Chapman and Hall, 1980.] Should outliers be removed? Depending on the context, outliers either deserve special attention or should be completely ignored. Take the example of revenue forecasting: if unusual spikes of revenue are observed, it's probably a good idea to pay extra attention to them and figure out what caused the spike. In the same way, an unusual transaction on a credit card is usually a sign of fraudulent activity, which is what the credit card issuer wants to prevent. So in instances like these, it is useful to look for and investigate further the outlier values. If outliers are, however, introduced by mechanical error, measurement error or anything else that can't be generalised, it is a good idea to remove these outliers before feeding the data to the modeling algorithm. Why? Because some algorithms are sensitive to outliers. Which machine learning models are sensitive to outliers? Some machine learning models are more sensitive to outliers than others. For instance, AdaBoost may treat outliers as "hard" cases and put tremendous weights on outliers, therefore producing a model with bad generalisation. Linear models, in particular Linear Regression, can also be sensitive to outliers. Decision trees tend to ignore the presence of outliers when creating the branches of their trees. Typically, trees make decisions by asking if variable x >= a certain value, and therefore the outlier will fall on either side of the branch, but it will be treated the same as the remaining values, regardless of its magnitude. A recent research article suggests that Neural Networks could also be sensitive to outliers, provided the number of outliers is high and the deviation is also high. I would argue that if the number of outliers is high (>15% as suggested in the article), then they are no longer outliers, but rather a fair representation of that variable. A link to this article can be found in the "Additional reading resources" lecture within this section of the course. How can outliers be identified? Outlier analysis and anomaly detection are a huge field of research devoted to optimising methods and creating new algorithms to reliably identify outliers. There are a huge number of ways, optimised to detect outliers in different situations. These are mostly targeted at identifying outliers when those are the observations that we indeed want to focus on, for example for fraudulent credit card activity. In this course, however, I will focus on identifying those outliers introduced by mechanical or measurement error: those outliers that are indeed a rare case in the population, and that could be ignored. I will show how to identify those outliers, so that in later sections of the course, we can learn how to pre-process them before using the variable to train machine learning algorithms. Extreme Value Analysis. The most basic form of outlier detection is **Extreme Value Analysis** of 1-dimensional data. 
The key for this method is to determine the statistical tails of the underlying distribution of the variable, and then find the values that sit at the very end of the tails. If the variable is Normally distributed (Gaussian), then the values that lie outside the mean plus or minus 3 times the standard deviation of the variable are considered outliers: - outliers = mean +/- 3 * std If the variable has a skewed distribution, a general approach is to calculate the quantiles, and then the inter-quantile range (IQR), as follows: - IQR = 75th quantile - 25th quantile An outlier will sit outside the following upper and lower boundaries: - Upper boundary = 75th quantile + (IQR * 1.5) - Lower boundary = 25th quantile - (IQR * 1.5) or for extreme cases: - Upper boundary = 75th quantile + (IQR * 3) - Lower boundary = 25th quantile - (IQR * 3) Datasets for this notebook: In this demo, we will use the House Prices and Titanic datasets. - To download the datasets please refer to the lecture **Datasets** in **Section 1** of the course. We will also use a dataset included in Scikit-learn: the Boston house prices dataset
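Before applying these rules to the real datasets, here is a small added sketch (not part of the original notebook) computing both sets of boundaries on a toy, roughly Gaussian sample:

```
import numpy as np

# Toy sample: approximately Gaussian, mean 50, std 5.
rng = np.random.RandomState(0)
x = rng.normal(loc=50, scale=5, size=1000)

# Gaussian rule: mean +/- 3 * std
upper_g = x.mean() + 3 * x.std()
lower_g = x.mean() - 3 * x.std()

# IQR rule: 25th / 75th quantiles -/+ 1.5 * IQR
q25, q75 = np.percentile(x, [25, 75])
iqr = q75 - q25
lower_iqr, upper_iqr = q25 - 1.5 * iqr, q75 + 1.5 * iqr

print(lower_g, upper_g)      # roughly 35 and 65
print(lower_iqr, upper_iqr)  # for Gaussian data, close to (slightly tighter than) the 3*std bounds
```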
###Code
# print information for boston dataset
from sklearn.datasets import load_boston
print(load_boston().DESCR)
###Output
.. _boston_dataset:
Boston house prices dataset
---------------------------
**Data Set Characteristics:**
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
:Attribute Information (in order):
- CRIM per capita crime rate by town
- ZN proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS proportion of non-retail business acres per town
- CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX nitric oxides concentration (parts per 10 million)
- RM average number of rooms per dwelling
- AGE proportion of owner-occupied units built prior to 1940
- DIS weighted distances to five Boston employment centres
- RAD index of accessibility to radial highways
- TAX full-value property-tax rate per $10,000
- PTRATIO pupil-teacher ratio by town
- B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
- LSTAT % lower status of the population
- MEDV Median value of owner-occupied homes in $1000's
:Missing Attribute Values: None
:Creator: Harrison, D. and Rubinfeld, D.L.
This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.
The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980. N.B. Various transformations are used in the table on
pages 244-261 of the latter.
The Boston house-price data has been used in many machine learning papers that address regression
problems.
.. topic:: References
- Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
- Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
###Markdown
In this demoWe will:- Identify outliers using complete case analysis in Normally distributed variables.- Identify outliers using complete case analysis in skewed variables.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# for Q-Q plots
import scipy.stats as stats
# boston house dataset for the demo
from sklearn.datasets import load_boston
# load the the Boston House price data
# load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
# I will use only 3 of the total variables for this demo
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
boston.head()
# load the titanic dataset
titanic = pd.read_csv('../titanic.csv',
usecols=['age', 'fare'])
# The variables age and fare have missing values,
# I will remove them for this demo
titanic.dropna(subset=['age', 'fare'], inplace=True)
titanic.head()
###Output
_____no_output_____
###Markdown
Identify variable distribution. In Normally distributed variables, outliers are those values that lie beyond the mean plus or minus 3 times the standard deviation. If the variables are skewed, however, we find outliers using the inter-quantile range. In order to decide which method to utilise to detect outliers, we first need to know the distribution of the variable. We can use histograms and Q-Q plots to determine if the variable is normally distributed. We can also use boxplots to directly visualise the outliers. Boxplots are a standard way of displaying the distribution of a variable utilising the first quartile, the median, the third quartile and the whiskers. Looking at a boxplot, you can easily identify: - The median, indicated by the line within the box. - The inter-quantile range (IQR), the box itself. - The quantiles: the 25th (Q1) is the lower and the 75th (Q3) the upper end of the box. - The whiskers, which extend to: -- top whisker: Q3 + 1.5 x IQR -- bottom whisker: Q1 - 1.5 x IQR Any value sitting outside the whiskers is considered an outlier. Let's look at the examples below.
###Code
# function to create histogram, Q-Q plot and
# boxplot
def diagnostic_plots(df, variable):
# function takes a dataframe (df) and
# the variable of interest as arguments
# define figure size
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.histplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
plt.ylabel('RM quantiles')
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
###Output
_____no_output_____
###Markdown
Normally distributed variables
###Code
# let's start with the variable RM from the
# boston house dataset.
# RM is the average number of rooms per dwelling
diagnostic_plots(boston, 'RM')
###Output
_____no_output_____
###Markdown
From the histogram and the Q-Q plot, we see that the variable rm approximates a Gaussian distribution quite well. In the boxplot, we see that the variable could have outliers, as there are many dots sitting outside the whiskers, at both tails of the distribution.
###Code
# let's inspect now the variable Age from the titanic
# refers to the age of the passengers on board
diagnostic_plots(titanic, 'age')
###Output
_____no_output_____
###Markdown
From the histogram and the Q-Q plot, we see that the variable approximates fairly well a Gaussian distribution. There is a deviation from the distribution towards the smaller values of age. In the boxplot, we can see that the variable could have outliers, as there are many dots sitting outside the whiskers, at the right end of the distribution (top whisker in the boxplot). Skewed variables
###Code
# variable LSTAT from the boston house dataset
# LSTAT is the % lower status of the population
diagnostic_plots(boston, 'LSTAT')
###Output
_____no_output_____
###Markdown
LSTAT is not normally distributed, it is skewed with a tail to the right. According to the boxplot, there are some outliers at the right end of the distribution of the variable.
###Code
# variable CRIM from the boston house dataset
# CRIM is the per capita crime rate by town
diagnostic_plots(boston, 'CRIM')
###Output
_____no_output_____
###Markdown
CRIM is heavily skewed, with a tail to the right. There seems to be quite a few outliers as well at the right end of the distribution, according to the boxplot.
###Code
# variable Fare from the titanic dataset
# Fare is the price paid for the ticket by
# the passengers
diagnostic_plots(titanic, 'fare')
###Output
_____no_output_____
###Markdown
Fare is also very skewed, and shows some unusual values at the right end of its distribution.In the next cells We will identify outliers using the mean and the standard deviation for the variables RM and Age from the boston and titanic datasets, respectively. Then we will use the inter-quantile range to identify outliers for the variables LSTAT, CRIM and Fare from the boston and titanic datasets. Outlier detection for Normally distributed variables
###Code
# function to find upper and lower boundaries
# for normally distributed variables
def find_normal_boundaries(df, variable):
# calculate the boundaries outside which sit the outliers
# for a Gaussian distribution
upper_boundary = df[variable].mean() + 3 * df[variable].std()
lower_boundary = df[variable].mean() - 3 * df[variable].std()
return upper_boundary, lower_boundary
# calculate boundaries for RM
upper_boundary, lower_boundary = find_normal_boundaries(boston, 'RM')
upper_boundary, lower_boundary
###Output
_____no_output_____
###Markdown
From the above we conclude that values bigger than 8.4 or smaller than 4.2 occur very rarely for the variable RM. Therefore, we can consider them outliers.
###Code
# inspect the number and percentage of outliers for RM
print('total number of houses: {}'.format(len(boston)))
print('houses with more than 8.4 rooms (right end outliers): {}'.format(
len(boston[boston['RM'] > upper_boundary])))
print('houses with less than 4.2 rooms (left end outliers: {}'.format(
len(boston[boston['RM'] < lower_boundary])))
print()
print('% right end outliers: {}'.format(
len(boston[boston['RM'] > upper_boundary]) / len(boston)))
print('% left end outliers: {}'.format(
len(boston[boston['RM'] < lower_boundary]) / len(boston)))
###Output
total number of houses: 506
houses with more than 8.4 rooms (right end outliers): 4
houses with less than 4.2 rooms (left end outliers: 4
% right end outliers: 0.007905138339920948
% left end outliers: 0.007905138339920948
###Markdown
Using Extreme Value Analysis we identified outliers at both ends of the distribution of RM. The percentage of outliers is small (about 1.6% considering the 2 tails together), which makes sense, because outliers are precisely that: rare values, rare occurrences. Let's move on to Age in the titanic dataset.
###Code
# calculate boundaries for Age in the titanic
upper_boundary, lower_boundary = find_normal_boundaries(titanic, 'age')
upper_boundary, lower_boundary
###Output
_____no_output_____
###Markdown
The upper boundary is 73 years, which means that passengers older than 73 were very few, if any, in the titanic. The lower boundary is negative. Because negative age does not exist, it only makes sense to look for outliers utilising the upper boundary.
###Code
# lets look at the number and percentage of outliers
print('total passengers: {}'.format(len(titanic)))
print('passengers older than 73: {}'.format(
len(titanic[titanic['age'] > upper_boundary])))
print()
print('% of passengers older than 73: {}'.format(
len(titanic[titanic['age'] > upper_boundary]) / len(titanic)))
###Output
total passengers: 1045
passengers older than 73: 3
% of passengers older than 73: 0.0028708133971291866
###Markdown
According to the output above, there were 3 passengers older than 73 on board the Titanic, who could be considered outliers, as the majority of the passengers were much younger.
###Code
# function to find upper and lower boundaries
# for skewed distributed variables
def find_skewed_boundaries(df, variable, distance):
# Let's calculate the boundaries outside which sit the outliers
# for skewed distributions
# distance passed as an argument, gives us the option to
# estimate 1.5 times or 3 times the IQR to calculate
# the boundaries.
IQR = df[variable].quantile(0.75) - df[variable].quantile(0.25)
lower_boundary = df[variable].quantile(0.25) - (IQR * distance)
upper_boundary = df[variable].quantile(0.75) + (IQR * distance)
return upper_boundary, lower_boundary
# looking for outliers,
# using the interquantile proximity rule
# IQR * 1.5, the standard metric
# for LSTAT in the boston house dataset
upper_boundary, lower_boundary = find_skewed_boundaries(boston, 'LSTAT', 1.5)
upper_boundary, lower_boundary
# lets look at the number and percentage of outliers
# for LSTAT
print('total houses: {}'.format(len(boston)))
print('houses with LSTAT bigger than 32: {}'.format(
len(boston[boston['LSTAT'] > upper_boundary])))
print()
print('% houses with LSTAT bigger than 32: {}'.format(
len(boston[boston['LSTAT'] > upper_boundary])/len(boston)))
###Output
total houses: 506
houses with LSTAT bigger than 32: 7
% houses with LSTAT bigger than 32: 0.01383399209486166
###Markdown
The upper boundary shows a value of ~32. The lower boundary is negative; however, the variable LSTAT does not take negative values, so to calculate the outliers for LSTAT we only use the upper boundary. This coincides with what we observed in the boxplot earlier in the notebook: outliers sit only at the right tail of LSTAT's distribution. We observe 7 houses, about 1.4% of the dataset, with extremely high values for LSTAT.
###Code
# looking for outliers,
# using the interquantile proximity rule
# IQR * 3, now I am looking for extremely high values
upper_boundary, lower_boundary = find_skewed_boundaries(boston, 'CRIM', 3)
upper_boundary, lower_boundary
# lets look at the number and percentage of outliers
# for CRIM
print('total houses: {}'.format(len(boston)))
print('houses with CRIM bigger than 14: {}'.format(
len(boston[boston['CRIM'] > upper_boundary])))
print()
print('% houses with CRIM bigger than 14: {}'.format(
len(boston[boston['CRIM'] > upper_boundary]) / len(boston)))
###Output
total houses: 506
houses with CRIM bigger than 14: 30
% houses with CRIM bigger than 14: 0.05928853754940711
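###Markdown
If we decided to discard these observations, trimming is straightforward. The sketch below is only a preview (outlier handling is covered later in the course); it creates a new dataframe, boston_trimmed (an illustrative name), so the original data is left untouched.
###Code
# Preview: remove the houses whose CRIM value lies above the upper boundary.
boston_trimmed = boston[boston['CRIM'] <= upper_boundary]
# Compare the number of rows before and after trimming.
len(boston), len(boston_trimmed)
###Output
_____no_output_____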
###Markdown
When using 3 times the inter-quartile range to find outliers, we find that ~6% of the houses are in areas with unusually high crime rates. For CRIM as well, the lower boundary is negative, so it only makes sense to use the upper boundary to calculate outliers, as the variable takes only positive values. This coincides with what we observed in CRIM's boxplot earlier in this notebook.
###Code
# finally, identify outliers in Fare in the
# titanic dataset. I will look again for extreme values
# using IQR * 3
upper_boundary, lower_boundary = find_skewed_boundaries(titanic, 'fare', 3)
upper_boundary, lower_boundary
# lets look at the number and percentage of passengers
# who paid extremely high Fares
print('total passengers: {}'.format(len(titanic)))
print('passengers who paid more than 117: {}'.format(
len(titanic[titanic['fare'] > upper_boundary])))
print()
print('% passengers who paid more than 117: {}'.format(
len(titanic[titanic['fare'] > upper_boundary])/len(titanic)))
###Output
total passengers: 1045
passengers who paid more than 117: 67
% passengers who paid more than 117: 0.06411483253588517
###Markdown
OutliersAn outlier is a data point that is significantly different from the remaining data."An outlier is an observation that deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism." [D. Hawkins. Identification of Outliers, Chapman and Hall , 1980.] Should outliers be removed?Depending on the context, outliers either deserve special attention or should be ignored. Take the example of revenue forecasting: if unusual spikes of revenue are observed, it's probably a good idea to pay extra attention to them and figure out what caused the spike. In the same way, an unusual transaction on a credit card might be a sign of fraudulent activity, which is what the credit card issuer wants to prevent. So, in instances like these, it is useful to look for and investigate further the outlier values.If outliers are, however, introduced by mechanical or measurement error, it is a good idea to remove these outliers before training the model. Why? because some algorithms are sensitive to outliers. Machine learning models and outliersSome machine learning models are sensitive to outliers. For instance, AdaBoost may treat outliers as "hard" cases and put tremendous weights on them, thus producing a model with poor generalisation.Linear models, in particular linear regression, can also be sensitive to outliers.Decision trees-based models are robust to outliers. Decision trees make decisions by asking if variable x is >= than a certain value, and therefore the outlier will fall on each side of the equation, but it will be treated similarly to non-outlier values.A research article suggests that neural networks could also be sensitive to outliers, provided the number of outliers is high and the deviation is also high. I would argue that if the number of outliers is high (> 15% as suggested in the article), then they are no longer outliers, but rather a fair representation of that variable. A link to this article can be found in the "Additional reading resources" lecture at the end of this section of the course. Identifying outliersOutlier analysis and anomaly detection is a huge field of research devoted to optimising methods and creating new algorithms to reliably identify outliers. There are plenty of ways to detect outliers in different situations. These are mostly targeted to identify outliers when those are the observations that we want to focus on, for example, fraudulent credit card activity.In this course, however, we will focus on identifying outliers introduced by mechanical or measurement error. For this course, we won't be interested in the outliers per se, we just want to treat them before training our models.Here, I will show you how to identify outliers. In later sections of the course we will learn how to preprocess them before training machine learning models. Extreme Value AnalysisThe most basic form of outlier detection is **Extreme Value Analysis** of 1-dimensional data. 
The key to this method is to determine the statistical tails of the underlying distribution of the variable and then find the values that sit at the very end of the distribution.If the variable is normally distributed (Gaussian), then the values that lie outside the mean, plus or minus 3 times the standard deviation of the variable, are considered outliers.- outliers = mean +/- 3* std.If the variable is skewed, a general approach is to calculate the quantiles, and then the inter-quartile range (IQR):- IQR = 75th quantile - 25th quantileAn outlier will sit outside the following upper and lower boundaries:- Upper boundary = 75th quantile + (IQR * 1.5)- Lower boundary = 25th quantile - (IQR * 1.5)or for extreme cases:- Upper boundary = 75th quantile + (IQR * 3)- Lower boundary = 25th quantile - (IQR * 3) Datasets for this demo: We will use the House Prices and Titanic datasets.- To download the datasets please refer to the lecture **Datasets** in **Section 1** of the course.We will also use the Boston house prices dataset from Scikit-learn: Boston house prices dataset
###Code
# Print information for boston dataset.
from sklearn.datasets import load_boston
print(load_boston().DESCR)
###Output
.. _boston_dataset:
Boston house prices dataset
---------------------------
**Data Set Characteristics:**
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
:Attribute Information (in order):
- CRIM per capita crime rate by town
- ZN proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS proportion of non-retail business acres per town
- CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX nitric oxides concentration (parts per 10 million)
- RM average number of rooms per dwelling
- AGE proportion of owner-occupied units built prior to 1940
- DIS weighted distances to five Boston employment centres
- RAD index of accessibility to radial highways
- TAX full-value property-tax rate per $10,000
- PTRATIO pupil-teacher ratio by town
- B 1000(Bk - 0.63)^2 where Bk is the proportion of black people by town
- LSTAT % lower status of the population
- MEDV Median value of owner-occupied homes in $1000's
:Missing Attribute Values: None
:Creator: Harrison, D. and Rubinfeld, D.L.
This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.
The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980. N.B. Various transformations are used in the table on
pages 244-261 of the latter.
The Boston house-price data has been used in many machine learning papers that address regression
problems.
.. topic:: References
- Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
- Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
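###Markdown
Before turning to the real datasets, here is a minimal, self-contained sketch of both rules applied to a synthetic, roughly Gaussian sample. The variable names below are illustrative only.
###Code
import numpy as np

# Synthetic, roughly Gaussian sample.
rng = np.random.RandomState(42)
sample = rng.normal(loc=50, scale=5, size=1000)

# Gaussian rule: mean +/- 3 standard deviations.
mean, std = sample.mean(), sample.std()
print('Gaussian boundaries: {} to {}'.format(mean - 3 * std, mean + 3 * std))

# IQR proximity rule: quartiles +/- 1.5 times the IQR.
q25, q75 = np.percentile(sample, [25, 75])
iqr = q75 - q25
print('IQR boundaries: {} to {}'.format(q25 - 1.5 * iqr, q75 + 1.5 * iqr))
###Output
_____no_output_____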
###Markdown
In this demoWe will:- Identify outliers using Extreme Value Analysis in Normally distributed variables.- Identify outliers using Extreme Value Analysis in skewed variables.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# for Q-Q plots
import scipy.stats as stats
# boston house dataset for the demo
from sklearn.datasets import load_boston
# Load the Boston House prices dataset from sklearn
boston_dataset = load_boston()
# Create a dataframe with the independent variables.
# I will use only 3 of the total variables for this demo.
boston = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)[[
'RM', 'LSTAT', 'CRIM'
]]
boston.head()
# Load the titanic dataset.
titanic = pd.read_csv('../titanic.csv',
usecols=['age', 'fare'])
# The variables age and fare have missing values.
# I will remove them for this demo.
titanic.dropna(subset=['age', 'fare'], inplace=True)
titanic.head()
###Output
_____no_output_____
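###Markdown
Before plotting, a quick look at the summary statistics gives a first feel for the scale and spread of each variable.
###Code
# Summary statistics for the variables used in this demo.
print(boston.describe())
print(titanic.describe())
###Output
_____no_output_____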
###Markdown
Variable distributionIn normally distributed variables, outliers are those values that lie beyond the mean, plus or minus 3 times the standard deviation. If the variables are skewed, however, we find outliers using the inter-quartile range. In order to decide which method to use to detect outliers, we first need to know the distribution of the variable.We can use histograms and Q-Q plots to determine if the variable is normally distributed. We can also use boxplots to directly visualise the outliers. Boxplots are a standard way of displaying the distribution of a variable, utilising the first quartile, the median, the third quartile, and the whiskers.Looking at a boxplot, you can easily identify:- The median, indicated by the line within the box.- The inter-quartile range (IQR), the box itself.- The quartiles: the 25th percentile (Q1) is the lower and the 75th percentile (Q3) the upper end of the box.- The whiskers, which extend to: -- top whisker: Q3 + 1.5 x IQR -- bottom whisker: Q1 - 1.5 x IQRAny value sitting outside the whiskers is considered an outlier. Let's look at some examples below.
###Code
# Function to create a histogram, a Q-Q plot and
# a boxplot.
def diagnostic_plots(df, variable):
# The function takes a dataframe (df) and
# the variable of interest as arguments.
# Define figure size.
plt.figure(figsize=(16, 4))
# histogram
plt.subplot(1, 3, 1)
sns.histplot(df[variable], bins=30)
plt.title('Histogram')
# Q-Q plot
plt.subplot(1, 3, 2)
stats.probplot(df[variable], dist="norm", plot=plt)
    plt.ylabel('{} quantiles'.format(variable))
# boxplot
plt.subplot(1, 3, 3)
sns.boxplot(y=df[variable])
plt.title('Boxplot')
plt.show()
###Output
_____no_output_____
###Markdown
Normally distributed variables
###Code
# Let's begin with the variable RM from the
# boston house dataset.
# RM is the average number of rooms per dwelling.
diagnostic_plots(boston, 'RM')
###Output
_____no_output_____
###Markdown
From the histogram and the Q-Q plot, we see that the variable RM shows roughly a Gaussian distribution. In the boxplot, we see some outliers, that is, the dots outside of the whiskers at both sides of the distribution.
###Code
# Let's inspect the variable Age from the Titanic.
diagnostic_plots(titanic, 'age')
###Output
_____no_output_____
###Markdown
From the histogram and the Q-Q plot, we see that the variable approximates a Gaussian distribution. There is a deviation from the distribution towards the smaller values of age. In the boxplot, we can see some outliers, the dots outside of the whiskers at the top. Skewed variables
###Code
# Variable LSTAT from the boston house prices dataset.
# LSTAT is the % lower status of the population.
diagnostic_plots(boston, 'LSTAT')
###Output
_____no_output_____
###Markdown
LSTAT is not normally distributed, it is skewed with a tail to the right. According to the boxplot, there are some outliers at the right end of the distribution.
###Code
# Variable CRIM from the boston house prices dataset.
# CRIM is the per capita crime rate by town.
diagnostic_plots(boston, 'CRIM')
###Output
_____no_output_____
###Markdown
CRIM is heavily skewed, with a tail to the right. According to the boxplot, there are a few outliers at the right end of the distribution.
###Code
# Variable Fare from the Titanic dataset.
# Fare is the ticket price.
diagnostic_plots(titanic, 'fare')
###Output
_____no_output_____
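###Markdown
As a numeric complement to the plots above, we can also compute the skewness of each variable; values close to 0 suggest a roughly symmetric distribution, while large positive values indicate a long right tail. This is just a quick sanity check using pandas.
###Code
# Skewness of the variables inspected above.
for var in ['RM', 'LSTAT', 'CRIM']:
    print('{} skewness: {}'.format(var, boston[var].skew()))
for var in ['age', 'fare']:
    print('{} skewness: {}'.format(var, titanic[var].skew()))
###Output
_____no_output_____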
###Markdown
Fare is also highly skewed, and shows some unusual values at the right end of its distribution.In the next cells, we will identify outliers using the mean and the standard deviation for the variables RM and Age from the boston and titanic datasets, respectively. Then we will use the inter-quantile range to identify outliers for the variables LSTAT, CRIM and Fare from the boston and titanic datasets. Outlier detection Normally distributed variables
###Code
# Function to find upper and lower boundaries
# for normally distributed variables.
def find_normal_boundaries(df, variable):
# Calculate the boundaries
# for a Gaussian distribution.
upper_boundary = df[variable].mean() + 3 * df[variable].std()
lower_boundary = df[variable].mean() - 3 * df[variable].std()
return upper_boundary, lower_boundary
# calculate boundaries for RM
upper_boundary, lower_boundary = find_normal_boundaries(boston, 'RM')
upper_boundary, lower_boundary
###Output
_____no_output_____
###Markdown
Values bigger than 8.4 or smaller than 4.2 occur very rarely in RM. Therefore, we can consider them outliers.
###Code
# Inspect the number and percentage of outliers in RM.
print('total number of houses: {}'.format(len(boston)))
print('houses with more than 8.4 rooms (right end outliers): {}'.format(
len(boston[boston['RM'] > upper_boundary])))
print('houses with less than 4.2 rooms (left end outliers): {}'.format(
len(boston[boston['RM'] < lower_boundary])))
print()
print('% right end outliers: {}'.format(
len(boston[boston['RM'] > upper_boundary]) / len(boston)))
print('% left end outliers: {}'.format(
len(boston[boston['RM'] < lower_boundary]) / len(boston)))
###Output
total number of houses: 506
houses with more than 8.4 rooms (right end outliers): 4
houses with less than 4.2 rooms (left end outliers): 4
% right end outliers: 0.007905138339920948
% left end outliers: 0.007905138339920948
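###Markdown
The two tails can also be combined into a single boolean mask (here called outliers_rm, an illustrative name) to obtain the overall fraction of RM outliers in one number, using the boundaries computed above.
###Code
# Combine both tails into one mask and compute the overall outlier fraction.
outliers_rm = (boston['RM'] > upper_boundary) | (boston['RM'] < lower_boundary)
print('total RM outliers: {}'.format(outliers_rm.sum()))
print('% RM outliers: {}'.format(outliers_rm.mean()))
###Output
_____no_output_____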
###Markdown
Using Extreme Value Analysis we identified outliers at both ends of the distribution of RM. The percentage of outliers is small (about 1.6% considering the 2 tails together). Let's move on to Age from the titanic dataset.
###Code
# Calculate boundaries for Age.
upper_boundary, lower_boundary = find_normal_boundaries(titanic, 'age')
upper_boundary, lower_boundary
###Output
_____no_output_____
###Markdown
The upper boundary is 73 years, which means that passengers older than 73 were rare in the titanic. The lower boundary is negative. Because negative age does not exist, it only makes sense to look for outliers using the upper boundary.
###Code
# Let's look at the number and percentage of outliers.
print('total passengers: {}'.format(len(titanic)))
print('passengers older than 73: {}'.format(
len(titanic[titanic['age'] > upper_boundary])))
print()
print('% of passengers older than 73: {}'.format(
len(titanic[titanic['age'] > upper_boundary]) / len(titanic)))
###Output
total passengers: 1045
passengers older than 73: 3
% of passengers older than 73: 0.0028708133971291866
###Markdown
There were 3 passengers older than 73 on the titanic. Skewed variables
###Code
# Function to find upper and lower boundaries
# for skewed variables.
def find_skewed_boundaries(df, variable, distance):
# Let's calculate the boundaries
# for skewed distributions
# The parameter "distance" gives us the option to
# estimate 1.5 times or 3 times the IQR when defining
# the boundaries.
IQR = df[variable].quantile(0.75) - df[variable].quantile(0.25)
lower_boundary = df[variable].quantile(0.25) - (IQR * distance)
upper_boundary = df[variable].quantile(0.75) + (IQR * distance)
return upper_boundary, lower_boundary
# Find outliers with the IQR proximity rule.
# Here we use IQR * 1.5, the standard metric.
# For LSTAT in the boston house prices dataset.
upper_boundary, lower_boundary = find_skewed_boundaries(boston, 'LSTAT', 1.5)
upper_boundary, lower_boundary
# Let's look at the number and percentage of outliers
# for LSTAT.
print('total houses: {}'.format(len(boston)))
print('houses with LSTAT bigger than 32: {}'.format(
len(boston[boston['LSTAT'] > upper_boundary])))
print()
print('% houses with LSTAT bigger than 32: {}'.format(
len(boston[boston['LSTAT'] > upper_boundary])/len(boston)))
###Output
total houses: 506
houses with LSTAT bigger than 32: 7
% houses with LSTAT bigger than 32: 0.01383399209486166
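###Markdown
The flagged values themselves can be listed to confirm that they sit well above the boundary (a quick check, using the upper boundary computed above).
###Code
# List the LSTAT values above the upper boundary, highest first.
boston.loc[boston['LSTAT'] > upper_boundary, 'LSTAT'].sort_values(ascending=False)
###Output
_____no_output_____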
###Markdown
The upper boundary is ~32. The lower boundary is negative. However, the variable LSTAT does not take negative values. Thus, outliers in LSTAT will only be found beyond the upper boundary. This coincides with what we observed in the boxplot earlier in the notebook. Outliers were only found at the right tail of LSTAT's distribution.We observe 7 houses, 1.3 % of the dataset, with extremely high values for LSTAT.
###Code
# Find outliers with the IQR proximity rule.
# Here we use IQR * 3 to look for extremely high values.
upper_boundary, lower_boundary = find_skewed_boundaries(boston, 'CRIM', 3)
upper_boundary, lower_boundary
# Let's look at the number and percentage of outliers
# for CRIM.
print('total houses: {}'.format(len(boston)))
print('houses with CRIM bigger than 14: {}'.format(
len(boston[boston['CRIM'] > upper_boundary])))
print()
print('% houses with CRIM bigger than 14: {}'.format(
len(boston[boston['CRIM'] > upper_boundary]) / len(boston)))
###Output
total houses: 506
houses with CRIM bigger than 14: 30
% houses with CRIM bigger than 14: 0.05928853754940711
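###Markdown
As a preview of the pre-processing covered later in the course, the extreme CRIM values could, for example, be capped at the boundary instead of removed. The sketch below works on a copy, boston_capped (an illustrative name), so the original dataframe is left untouched.
###Code
import numpy as np

# Cap (winsorise) CRIM at the upper boundary instead of dropping the rows.
boston_capped = boston.copy()
boston_capped['CRIM'] = np.where(boston_capped['CRIM'] > upper_boundary,
                                 upper_boundary,
                                 boston_capped['CRIM'])
# The maximum now equals the boundary.
boston_capped['CRIM'].max(), upper_boundary
###Output
_____no_output_____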
###Markdown
When using 3 times the inter-quartile range to find outliers, we find that ~6% of the houses are in areas with unusually high crime rates. For CRIM as well, the lower boundary is negative, so it only makes sense to use the upper boundary to find outliers, as the variable takes only positive values. This coincides with what we observed in CRIM's boxplot earlier in this notebook.
###Code
# Finally, identify outliers in Fare.
# I will look again for extreme values
# using IQR * 3.
upper_boundary, lower_boundary = find_skewed_boundaries(titanic, 'fare', 3)
upper_boundary, lower_boundary
# Let's look at the number and percentage of passengers
# who paid extremely high Fares.
print('total passengers: {}'.format(len(titanic)))
print('passengers who paid more than 117: {}'.format(
len(titanic[titanic['fare'] > upper_boundary])))
print()
print('% passengers who paid more than 117: {}'.format(
len(titanic[titanic['fare'] > upper_boundary])/len(titanic)))
###Output
total passengers: 1045
passengers who paid more than 117: 67
% passengers who paid more than 117: 0.06411483253588517
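###Markdown
Finally, it is worth noting how sensitive the outlier count is to the chosen distance. Below is a quick comparison for fare, using the find_skewed_boundaries function defined above; the loop variables are illustrative only.
###Code
# Compare the standard (1.5 * IQR) and extreme (3 * IQR) rules for fare.
for distance in [1.5, 3]:
    upper, lower = find_skewed_boundaries(titanic, 'fare', distance)
    n_outliers = len(titanic[titanic['fare'] > upper])
    print('distance {}: upper boundary {}, outliers {}'.format(
        distance, upper, n_outliers))
###Output
_____no_output_____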
|
Data_Science_2/Full_Day/3-AzureMLStudio-Reference.ipynb | ###Markdown
Cloud-based machine learningThus far, we have looked at building and fitting ML models “locally.” True, the notebooks have been located in the cloud themselves, but the models with all of their predictive and classification power are stuck in those notebooks. To use these models, you would have to load data into your notebooks and get the results there.In practice, we want those models accessible from a number of locations. And while the management of production ML models has a lifecycle all its own, one part of that is making models accessible from the web. One way to do so is to develop them using third-party cloud tools, such as [Microsoft Azure ML Studio](https://studio.azureml.net) (not to be confused with Microsoft Azure Machine Learning Service, which provides end-to-end lifecycle management for ML models).Alternatively, we can develop and deploy a function that can be accessed by other programs over the web (a web service) that runs within Azure ML Studio, and we can do so entirely from a Python notebook. In this section, we will use the [`azureml`](https://github.com/Azure/Azure-MachineLearning-ClientLibrary-Python) package to deploy an Azure ML web service directly from within a Python notebook (or other Python environment).> **Note:** The `azureml` package presently works only with Python 2. If your notebook is not currently running Python 2, change it in the menu at the top of the notebook by clicking **Kernel > Change kernel > Python 2**. Create and connect to an Azure ML Studio workspaceThe `azureml` package is installed by default with Azure Notebooks, so we don't have to worry about that. It uses an Azure ML Studio workspace ID and authorization token to connect your notebook to the workspace; you will obtain the ID and token by following these steps:1. Open [Azure ML Studio](https://studio.azureml.net) in a new browser tab and sign in with a Microsoft account. Azure ML Studio is free and does not require an Azure subscription. Once signed in with your Microsoft account (the same credentials you’ve used for Azure Notebooks), you're in your “workspace.”2. On the left pane, click **Settings**. 3. On the **Name** tab, the **Workspace ID** field contains your workspace ID. Copy that ID into the `workspace_id` value in the code cell in Step 5 of the notebook below. 4. Click the **Authorization Tokens** tab, and then copy either token into the `authorization_token` value in the code cell in Step 5 of the notebook. 5. Run the code cell below; if it runs without error, you're ready to continue.
###Code
from azureml import Workspace
# Replace the values with those from your own Azure ML Studio instance; see Prerequisites
# The workspace_id is a string of hexadecimal characters; the token is a long string of random characters.
workspace_id = 'your_workspace_id'
authorization_token = 'your_auth_token'
ws = Workspace(workspace_id, authorization_token)
###Output
_____no_output_____
###Markdown
Explore forest fire data Let's look at a meteorological dataset collected by Cortez and Morais for their 2007 study of the burned area of forest fires in the northeast region of Portugal.> P. Cortez and A. Morais. A Data Mining Approach to Predict Forest Fires using Meteorological Data. In J. Neves, M. F. Santos and J. Machado Eds., New Trends in Artificial Intelligence, Proceedings of the 13th EPIA 2007 - Portuguese Conference on Artificial Intelligence, December, Guimaraes, Portugal, pp. 512-523, 2007. APPIA, ISBN-13 978-989-95618-0-9. The dataset contains the following features:- **`X`**: x-axis spatial coordinate within the Montesinho park map: 1 to 9- **`Y`**: y-axis spatial coordinate within the Montesinho park map: 2 to 9- **`month`**: month of the year: "1" to "12" jan-dec- **`day`**: day of the week: "1" to "7" sun-sat- **`FFMC`**: FFMC index from the FWI system: 18.7 to 96.20- **`DMC`**: DMC index from the FWI system: 1.1 to 291.3 - **`DC`**: DC index from the FWI system: 7.9 to 860.6 - **`ISI`**: ISI index from the FWI system: 0.0 to 56.10- **`temp`**: temperature in Celsius degrees: 2.2 to 33.30- **`RH`**: relative humidity in %: 15.0 to 100- **`wind`**: wind speed in km/h: 0.40 to 9.40 - **`rain`**: outside rain in mm/m2 : 0.0 to 6.4 - **`area`**: the burned area of the forest (in ha): 0.00 to 1090.84 Let's load the dataset and visualize the area that was burned in relation to the temperature in that region.
###Code
import pandas as pd
df = pd.read_csv('../Data/forestfires.csv')
%matplotlib inline
from ggplot import *
ggplot(aes(x='temp', y='area'), data=df) + geom_line() + geom_point()
###Output
_____no_output_____
###Markdown
Intuitively, the hotter the weather, the more hectares burned in forest fires. Transfer your data to Azure ML Studio We have our data, but how do we get it into Azure ML Studio in order to use it there? That is where the `azureml` package comes in. It enables us to load data and models into Azure ML Studio from an Azure Notebook (or any Python environment). The first code cell of this notebook is what establishes the connection with *your* Azure ML Studio account. Now that you have your notebook talking to Azure ML Studio, you can export your data to it:
###Code
from azureml import DataTypeIds
dataset = ws.datasets.add_from_dataframe(
dataframe=df,
data_type_id=DataTypeIds.GenericCSV,
name='Forest Fire Data',
description='Paulo Cortez and Aníbal Morais (Univ. Minho) @ 2007'
)
###Output
_____no_output_____
###Markdown
After running the code above, you can see the dataset listed in the **Datasets** section of the Azure Machine Learning Studio workspace. (**Note**: You might need to switch between browser tabs and refresh the page in order to see the dataset.)It is also straightforward to list the datasets available in the workspace and transfer datasets from the workspace to the notebook:
###Code
print('\n'.join([i.name for i in ws.datasets if not i.is_example])) # only list user-created datasets
###Output
_____no_output_____
###Markdown
You can also interact with and examine the dataset in Azure ML Studio directly from your notebook:
###Code
# Read some more of the metadata
ds = ws.datasets['Forest Fire Data']
print(ds.name)
print(ds.description)
print(ds.family_id)
print(ds.data_type_id)
print(ds.created_date)
print(ds.size)
# Read the contents
df2 = ds.to_dataframe()
df2.head()
###Output
_____no_output_____
###Markdown
Create your model We're now back into familiar territory: prepping data for the model and fitting the model. To keep it interesting, we'll use the scikit-learn `train_test_split()` function with a slight change of parameters to select 75 percent of the data points for training and 25 percent for validation (testing).
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
df[['wind','rain','month','RH']],
df['temp'],
test_size=0.25,
random_state=42
)
###Output
_____no_output_____
###Markdown
Did you see what we did there? Rather than select all of the variables for the model, we were more selective and just chose wind speed, rainfall, month, and relative humidity in order to predict temperature. Fit scikit-learn's `DecisionTreeRegressor` model using the training data. This algorithm brings the decision trees you used for classification in Section 6 to the kind of continuous-target prediction you previously did with linear regression.
###Code
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score
regressor = DecisionTreeRegressor(random_state=42)
regressor.fit(X_train, y_train)
y_test_predictions = regressor.predict(X_test)
print('R^2 for true vs. predicted test set forest temperature: {:0.2f}'.format(r2_score(y_test, y_test_predictions)))
# Play around with this algorithm.
# Can you get better results changing the variables you select for the training and test data?
# What if you look at different variables for the response?
###Output
_____no_output_____
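###Markdown
One way to "play around" as suggested in the cell above is to compare a few feature subsets with cross-validation before deploying anything. A minimal sketch, assuming `df` from the cells above; the two subsets chosen here are arbitrary examples:
###Code
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
candidate_features = {
    'wind/rain/month/RH': ['wind', 'rain', 'month', 'RH'],
    'FFMC/DMC/DC/ISI': ['FFMC', 'DMC', 'DC', 'ISI'],
}
for name, cols in candidate_features.items():
    # 5-fold cross-validated R^2 for a decision tree on this subset
    scores = cross_val_score(DecisionTreeRegressor(random_state=42),
                             df[cols], df['temp'], cv=5, scoring='r2')
    print('{}: mean R^2 = {:0.2f}'.format(name, scores.mean()))
###Output
_____no_output_____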
###Markdown
Deploy your model as a web service This is the important part. Once deployed as a web service, your model can be accessed from anywhere. This means that rather than refit a model every time you need a new prediction for a business or humanitarian use case, you can send the data to the pre-fitted model and get back a prediction. First, deploy the model as a predictive web service. To do so, create a wrapper function that takes input data as an argument and calls `predict()` with your trained model and this input data, returning the results.
###Code
from azureml import services
@services.publish(workspace_id, authorization_token)
@services.types(wind=float, rain=float, month=int, RH=float)
@services.returns(float)
# The name of your web service is set to this function's name
def forest_fire_predictor(wind, rain, month, RH):
return regressor.predict([wind, rain, month, RH])
# Hold onto information about your web service so
# you can call it within the notebook later
service_url = forest_fire_predictor.service.url
api_key = forest_fire_predictor.service.api_key
help_url = forest_fire_predictor.service.help_url
service_id = forest_fire_predictor.service.service_id
###Output
_____no_output_____
###Markdown
You can also go to the **Web Services** section of your Azure ML Studio workspace to see the predictive web service running there. Consuming the web service Next, consume the web service. To see if this works, try it here from the notebook session in which the web service was created. Just call the predictor directly:
###Code
forest_fire_predictor.service(5.4, 0.2, 9, 22.1)
###Output
_____no_output_____
###Markdown
At any later time, you can use the stored API key and service URL to call the service. In the example below, data can be packaged in JavaScript Object Notation (JSON) format and sent to the web service.
###Code
import urllib2
import json
data = {"Inputs": {
"input1": {
"ColumnNames": [ "wind", "rain", "month", "RH"],
"Values": [["5.4", "0.2", "9", "22.1"]]
}
}, # Specified feature values
"GlobalParameters": {}
}
body = json.dumps(data)
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
req = urllib2.Request(service_url, body, headers)
try:
response = urllib2.urlopen(req)
result = json.loads(response.read()) # load JSON-formatted string response as dictionary
print(result['Results']['output1']['value']['Values'][0][0]) # Get the returned prediction
except urllib2.HTTPError, error:
print("The request failed with status code: " + str(error.code))
print(error.info())
print(json.loads(error.read()))
###Output
_____no_output_____ |
Notebooks/stock_analysis.ipynb | ###Markdown
Top 5 Stocks Analysis - Stock YHOO - Stock APA - Stock FTR - Stock XEC - Stock AKAM
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
import pathlib
# Read csv file - top stocks
fundamentals = pd.read_csv('../data/fundamentals.csv')
fundamentals
###Output
_____no_output_____
###Markdown
Quick Ratio The higher the ratio result, the better a company's liquidity and financial health; the lower the ratio, the more likely the company will struggle with paying debts. Quick ratio plot for YHOO
###Code
yahoo_quick_ratio = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'YHOO' ,
['Period Ending','Quick Ratio']
]
yahoo_quick_ratio
x_axis = yahoo_quick_ratio['Period Ending']
y_axis = yahoo_quick_ratio['Quick Ratio']
plt.plot(x_axis,y_axis)
plt.title("YHOO Quick Ratio 2012-2015")
plt.xlabel('Date')
plt.ylabel('Quick Ratio')
plt.savefig("../visuals/YHOO_quickratio.png")
plt.show()
###Output
_____no_output_____
###Markdown
Quick ratio plot for FTR
###Code
ftr_quick_ratio = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'FTR' ,
['Period Ending','Quick Ratio']
]
ftr_quick_ratio
x_axis = ftr_quick_ratio['Period Ending']
y_axis = ftr_quick_ratio['Quick Ratio']
plt.plot(x_axis,y_axis)
plt.title("FTR Quick Ratio 2012-2015")
plt.xlabel('Date')
plt.ylabel('Quick Ratio')
plt.savefig("../visuals/AKAM_quickratio.png")
plt.show()
## Quick ratio plot for FTR
###Output
_____no_output_____
###Markdown
Quick ratio plot for AKAM
###Code
akam_quick_ratio = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'AKAM' ,
['Period Ending','Quick Ratio']
]
akam_quick_ratio
x_axis = akam_quick_ratio['Period Ending']
y_axis = akam_quick_ratio['Quick Ratio']
plt.plot(x_axis,y_axis)
plt.title("AKAM Quick Ratio 2012-2015")
plt.xlabel('Date')
plt.ylabel('Quick Ratio')
plt.savefig("../visuals/AKAM_quickratio.png")
plt.show()
###Output
_____no_output_____
###Markdown
Quick ratio plot for EW
###Code
ew_quick_ratio = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'EW' ,
['Period Ending','Quick Ratio']
]
ew_quick_ratio
x_axis = ew_quick_ratio['Period Ending']
y_axis = ew_quick_ratio['Quick Ratio']
plt.plot(x_axis,y_axis)
plt.title("EW Quick Ratio 2012-2015")
plt.xlabel('Date')
plt.ylabel('Quick Ratio')
plt.savefig("../visuals/EW_quickratio.png")
plt.show()
###Output
_____no_output_____
###Markdown
Quick ratio plot for EBAY
###Code
ebay_quick_ratio = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'EBAY' ,
['Period Ending','Quick Ratio']
]
ebay_quick_ratio
x_axis = ebay_quick_ratio['Period Ending']
y_axis = ebay_quick_ratio['Quick Ratio']
plt.plot(x_axis,y_axis)
plt.title("EBAY Quick Ratio 2012-2015")
plt.xlabel('Date')
plt.ylabel('Quick Ratio')
plt.savefig("../visuals/EBAY_quickratio.png")
plt.show()
###Output
_____no_output_____
###Markdown
EPS The higher the earnings per share of a company, the better its profitability. EPS plot for YHOO
###Code
yahoo_eps = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'YHOO' ,
['Period Ending','Earnings Per Share']
]
yahoo_eps
x_axis = yahoo_eps['Period Ending']
y_axis = yahoo_eps['Earnings Per Share']
plt.plot(x_axis,y_axis)
plt.title("YHOO EPS 2012-2015")
plt.xlabel('Date')
plt.ylabel('Earnings')
plt.savefig("../visuals/YHOO_eps.png")
plt.show()
###Output
_____no_output_____
###Markdown
EPS plot for FTR
###Code
ftr_eps = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'FTR' ,
['Period Ending','Earnings Per Share']
]
ftr_eps
x_axis = ftr_eps['Period Ending']
y_axis = ftr_eps['Earnings Per Share']
plt.plot(x_axis,y_axis)
plt.title("FTR EPS 2012-2015")
plt.xlabel('Date')
plt.ylabel('Earnings')
plt.savefig("../visuals/FTR_eps.png")
plt.show()
###Output
_____no_output_____
###Markdown
EPS plot for AKAM
###Code
akam_eps = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'AKAM' ,
['Period Ending','Earnings Per Share']
]
akam_eps
x_axis = akam_eps['Period Ending']
y_axis = akam_eps['Earnings Per Share']
plt.plot(x_axis,y_axis)
plt.title("AKAM EPS 2012-2015")
plt.xlabel('Date')
plt.ylabel('Earnings')
plt.savefig("../visuals/AKAM_eps.png")
plt.show()
###Output
_____no_output_____
###Markdown
EPS plot for EW
###Code
ew_eps = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'EW' ,
['Period Ending','Earnings Per Share']
]
ew_eps
x_axis = ew_eps['Period Ending']
y_axis = ew_eps['Earnings Per Share']
plt.plot(x_axis,y_axis)
plt.title("EW EPS 2012-2015")
plt.xlabel('Date')
plt.ylabel('Earnings')
plt.savefig("../visuals/EW_eps.png")
plt.show()
###Output
_____no_output_____
###Markdown
EPS plot for EBAY
###Code
ebay_eps = fundamentals.loc[
fundamentals['Ticker Symbol'] == 'EBAY' ,
['Period Ending','Earnings Per Share']
]
ebay_eps
x_axis = ebay_eps['Period Ending']
y_axis = ebay_eps['Earnings Per Share']
plt.plot(x_axis,y_axis)
plt.title("EBAY EPS 2012-2015")
plt.xlabel('Date')
plt.ylabel('Earnings')
plt.savefig("../visuals/EBAY_eps.png")
plt.show()
###Output
_____no_output_____
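###Markdown
The per-ticker cells above are nearly identical, so they can be collapsed into a small helper that plots any metric for any ticker. A minimal sketch, assuming `fundamentals` is already loaded; the helper name and save path are illustrative:
###Code
def plot_metric(ticker, metric, save=False):
    # keep only this ticker's rows, with the date and the chosen metric
    data = fundamentals.loc[
        fundamentals['Ticker Symbol'] == ticker,
        ['Period Ending', metric]
    ]
    plt.plot(data['Period Ending'], data[metric])
    plt.title('{} {} 2012-2015'.format(ticker, metric))
    plt.xlabel('Date')
    plt.ylabel(metric)
    if save:
        plt.savefig('../visuals/{}_{}.png'.format(ticker, metric.replace(' ', '_')))
    plt.show()
# same charts as above, without the copy-paste
for ticker in ['YHOO', 'FTR', 'AKAM', 'EW', 'EBAY']:
    plot_metric(ticker, 'Earnings Per Share')
###Output
_____no_output_____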
###Markdown
Correlation between stocks - ROI A correlation coefficient of 1 indicates a perfect positive correlation between the prices of two stocks, meaning the stocks always move in the same direction by the same amount. A coefficient of -1 indicates a perfect negative correlation, meaning that the stocks have historically always moved in opposite directions. Correlation is used in modern portfolio theory (MPT) to include diversified assets that can help reduce the overall risk of a portfolio. One of the main criticisms of MPT, however, is that it assumes the correlation between assets is static over time. In reality, correlations often shift, especially during periods of higher volatility. In short, while correlation has some predictive value, the measure has limitations in its use. (Investopedia)
###Code
top_stocks_final_roi = pd.read_csv('../data/top_stocks_final_roi.csv')
roi = top_stocks_final_roi.set_index('Ticker Symbol').loc[
:,
'ROI %'
]
roi_df = pd.DataFrame(roi)
roi_df
top_stocks_final_roi
plt.scatter(
    x=roi_df.index,
    y=roi_df["ROI %"]
)
plt.xlabel('Ticker Symbol')
plt.ylabel('ROI %')
plt.show()
###Output
_____no_output_____
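###Markdown
The note above talks about correlation coefficients between stocks, but a single ROI number per ticker cannot be correlated. A minimal sketch of a pairwise correlation matrix of daily returns, assuming the price file used in the next section has one closing-price column per ticker:
###Code
prices = pd.read_csv('../data/clode_final_df.csv').rename(
    columns={'Unnamed: 0': 'Date'}).set_index('Date')
# correlate daily percentage returns rather than raw price levels
returns = prices.pct_change().dropna()
returns.corr()
###Output
_____no_output_____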
###Markdown
Stock Growth
###Code
stock_final_price = pd.read_csv("../data/clode_final_df.csv").rename(columns={"Unnamed: 0": "Date"}).set_index('Date')
stock_final_price
# Stock Price Growth
stock_final_price.plot(
xlabel = 'Date',
ylabel = 'Stock Price',
title = 'Stock Price Growth',
xlim = 0.5
)
plt.savefig("../visuals/stock_growth.png")
plt.show()
###Output
_____no_output_____ |
notebooks/NQ_devset_wrangle.ipynb | ###Markdown
Load Data
###Code
jsonfilename = "../data/nq/v1.0-simplified_nq-dev-all.jsonl"
def convert_nq_dev_to_squad_format(filepath):
'''
Load NQ dev set from disk, simplify each record, convert them to SQuAD format
'''
nq_examples = []
    with open(filepath, 'rb') as f:
for i, line in enumerate(tqdm(f)):
simp_example = simplify_nq_example(json.loads(line.decode('utf-8')))
answers, yes_no_flag = get_short_answers_from_span(simp_example)
if yes_no_flag:
# exclude questions with any annotation indicating yes/no question
print(f'Found a yes/no: {i}')
continue
clean_record = {'qas_id': simp_example['example_id'],
'title': extract_wiki_title(simp_example['document_url']),
'question_text': simp_example['question_text'],
'answers': answers,
'is_impossible': True if len(answers)==0 else False}
nq_ex = NQSquadExample(**clean_record)
nq_examples.append(nq_ex)
return nq_examples
def get_short_answers_from_span(simplified_example):
'''
Extracts short answer text from a simplified NQ example using the short answer span and document text and
returns flag if any annotation indicates a yes/no answer
Note:
1. Annotations that have multipart answers (more than 1 short answer) are dropped from list
of short answers
2. Answers with many tokens often resemble extractive snippets rather than canonical answers,
so we discard answers with more than 5 tokens. (https://arxiv.org/pdf/1906.00300.pdf)
'''
answers = []
yes_no_flag = False
for annotation in simplified_example['annotations']:
# check for yes/no questions
if annotation['yes_no_answer'] != 'NONE':
yes_no_flag = True
# extract short answers
if len(annotation['short_answers']) > 1 or len(annotation['short_answers']) == 0:
continue
else:
short_answer_span = annotation['short_answers'][0]
short_answer = " ".join(simplified_example['document_text'].split(" ")\
[short_answer_span['start_token']:short_answer_span['end_token']])
if len(short_answer.split(' ')) > 5:
continue
answers.append(short_answer)
return answers, yes_no_flag
def extract_wiki_title(document_url):
'''
This function applies a regular expression to an input wikipedia article URL
to extract and return the article title.
Args:
document_url (string)
Returns:
title (string) - article title
'''
pattern = 'title=(.*?)&'
try:
title = re.search(pattern, document_url).group(1)
except AttributeError:
title = 'No Title Found'
return title
class NQSquadExample(object):
"""
A single dev example for the NQ dataset represented in SQuAD format
Args:
qas_id: The example's unique identifier
question_text: The question string
title: The title of the Wikipedia article
answers: None by default, this is used during evaluation. Holds answers as well as their start positions.
is_impossible: False by default, set to True if the example has no possible answer.
"""
def __init__(
self,
qas_id,
question_text,
title,
answers,
is_impossible,
):
self.qas_id = qas_id
self.question_text = question_text
self.title = title
self.is_impossible = is_impossible
self.answers = answers
nq_examples = convert_nq_dev_to_squad_format(jsonfilename)
len(nq_examples)
impossible_count = 0
for ex in nq_examples:
if ex.is_impossible == True:
impossible_count += 1
impossible_count
impossible_count / len(nq_examples)  # fraction of dev questions with no answer
###Output
_____no_output_____
###Markdown
58% of NQ dev questions have no answer
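The converted examples can also be written to disk. A minimal sketch that dumps them as JSON lines (the output path is hypothetical, and only the fields kept by NQSquadExample are written, so this is a simplified SQuAD-style file without contexts or answer offsets):
###Code
import json
with open('../data/nq/nq_dev_squad_style.jsonl', 'w') as out:
    for ex in nq_examples:
        # each NQSquadExample carries only qas_id, question_text, title, answers, is_impossible
        out.write(json.dumps(ex.__dict__) + '\n')
###Output
_____no_output_____
###Markdown
Finally, spot-check one converted example: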
###Code
nq_examples[0].is_impossible
nq_examples[0].answers
###Output
_____no_output_____ |
TEMA-2/Clase14_ContinuacionReduccionVarianza.ipynb | ###Markdown
Variance reduction techniques, continued b). Complementary (antithetic) numbers- We start from a certain number of observations or random values- These values can sometimes carry an unwanted bias, even though it was neither forced nor intentional; as an example, suppose 10 observations are taken whose possible values lie in [0,1] and in every case the data obtained are below 0.5, which would be unusual.- Obtaining more observations can be expensive, impossible under the same conditions, or simply computationally costly, so what the "complementary numbers" technique suggests is to obtain as many additional values again using the formula > New random value = upper limit of the generated random values - generated random value + lower limit of the generated random values.> **Example:** if $x\sim U[a,b]$, the complementary number for this random value is>$$x_{comp}=b-x+a$$> *Particular case a=0, b=1:* $$x_{comp}=1-x$$- These values force a balance on the observations or random numbers and allow the process to be evaluated with values that exhibit lower variance.- In addition, we end up with twice as many numbers as were observed for simulating the process. Illustration example
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st           # statistics library
import pandas as pd
from itertools import cycle        # used to cycle through the plot colors

cycol = cycle('bgrcmk')            # iterator that yields the colors used in the plot
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Application example Building on the example of generating exponential random numbers seen in the last class, we will illustrate this method.
###Code
np.random.seed(95555)
# function that maps uniform random numbers to exponential random variables
xi = lambda ui: -np.log(ui)
# generate the uniform random numbers
N = 10
ri = np.random.rand(N)
# mean of the plain random observations (standard Monte Carlo)
m_rand = xi(ri).mean()
print('Mean of the random observations =', m_rand)
# complementary random numbers: N/2 fresh uniforms plus their complements
ri_c = np.random.rand(int(N/2))
xi_c = 1 - ri_c
# mean of the observations using the complementary-numbers method
m_comple = xi(np.concatenate([ri_c, xi_c])).mean()
print('Mean of the observations using the complementary-numbers method =', m_comple)
###Output
Mean of the random observations = 0.5801270297247335
Mean of the observations using the complementary-numbers method = 0.8353812035280248
###Markdown
Analysis: why does the method work? Recall Let us now analyze mathematically the effect this method is producing. Recall the expression for the variance of the mean (average) estimator built from averaged pairs (the full derivation appears below), where $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$ is the Pearson correlation coefficient, whose value lies in the interval [-1,1], the sign indicating the direction of the relationship. The figure shown at this point illustrated several groups of points (x, y) together with the correlation coefficient of each group. - The **covariance** is a value that indicates the degree of joint variation of two random variables around their means. It is the basic quantity for determining whether there is a dependence between the two variables. $$Cov(X,Y)=E[XY]-E[X]E[Y]$$- The **Pearson correlation coefficient** is a measure of the linear relationship between two quantitative random variables. Unlike the covariance, the Pearson correlation is independent of the measurement scale of the variables. Now recall that the average of two observations is given by the following expression: $$X^{(i)} = {X_1+X_2 \over 2}$$ Now consider the mean of a sample $\bar X(n)$ based on the averaged samples $X^{(i)}$, whose variance is given by $$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> We conclude that, to obtain a variance reduction with respect to the traditional Monte Carlo method, the correlation coefficient must satisfy $\rho(X_1,X_2)<0$. The question then is: how can we induce a negative correlation? **Draw on the board the relationship between the variables {$U\sim U(0,1)$} and {$1-U$}**
###Code
# relationship between the variables x1 = U and x2 = 1 - U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
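###Markdown
A quick numerical check of the formula above for the exponential example: the correlation between $-\ln U$ and $-\ln(1-U)$ is negative, so averaging antithetic pairs reduces the variance of the mean estimator. A minimal sketch; the sample sizes are arbitrary:
###Code
np.random.seed(42)
n, reps = 1000, 2000
crude, antithetic = [], []
for _ in range(reps):
    u = np.random.rand(n)
    crude.append((-np.log(u)).mean())                    # n plain draws
    u_half = np.random.rand(n // 2)
    pairs = (-np.log(u_half) - np.log(1 - u_half)) / 2   # n/2 antithetic pairs, n evaluations
    antithetic.append(pairs.mean())
u = np.random.rand(100000)
print('corr(-ln U, -ln(1-U)) =', np.corrcoef(-np.log(u), -np.log(1 - u))[0, 1])
print('Variance of the crude estimator      =', np.var(crude))
print('Variance of the antithetic estimator =', np.var(antithetic))
###Output
_____no_output_____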
###Markdown
The application example above showed that the method is quite simple and works rather well. But can we always expect a similar pattern? Unfortunately, the answer is no. There are two reasons why the approach works so well in the example: 1. There is a strong (positive) correlation between $U$ and $e^u$ on the interval [0,1], because the function is almost linear there. This means that a strong correlation in the simulation input is preserved and becomes a strong correlation in the simulation output. We should not expect impressive results with more complicated, nonlinear functions.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.title(r'$u_2 = e^{u_1}$')
plt.show()
###Output
_____no_output_____
###Markdown
In the case of our example of generating exponential random variables, the inverse-transform method gave the following result: $$x_i = -\ln u_i \ \rightarrow e^{-x_i} = u_i$$ Plotting it yields the following:
###Code
xi = np.random.rand(10)
ui = np.exp(-xi)
plt.plot(xi,ui,'o')
plt.title(r'$u_i = e^{-x_i}$')
plt.show()
###Output
_____no_output_____
###Markdown
2. Another reason is that the exponential function is monotonically increasing. As we will see shortly, monotonicity is an important condition for the stratified sampling method. > **Property of monotone functions:** a function is monotone when it is increasing or decreasing over its whole domain. Example where the complementary-numbers method can fail Consider the function $h(x)$ defined as $$h(x)=\begin{cases}0,& x<0\\ 2x,& 0\le x\le \tfrac{1}{2}\\ 2-2x,& \tfrac{1}{2}< x\le 1\\ 0,& x>1,\end{cases}$$ and suppose we want to approximate the integral $\int_0^1h(x)dx$ using Monte Carlo. As can be seen, the function $h(x)$ is a triangle, and the area enclosed under its curve is $$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 2}$$ Let us now estimate the value of this integral with the traditional Monte Carlo method and with the complementary-numbers method: $$\textbf{Plain Monte Carlo}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{2}$$ $$\textbf{Complementary-numbers method}\rightarrow X_c={h(U)+h(1-U) \over 2}$$ Now compare the variances of the two estimators: $$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$ > **Recall the expression for the expected value:** $$ \mathbb {E} [X]=\int_{-\infty }^{\infty }x f(x)dx $$ To know which variance is larger, take the difference of the two variances, which gives $$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$ In this case, because of the shape of $h(x)$, we have $$E[h(U)]=E[h(1-U)]={1\over 2} \rightarrow \text{the area under } h \text{; note that } 1-U\sim U[0,1]$$ $$\begin{aligned}E[h(U)h(1-U)]&= \int_0^{1/2} h(u)h(1-u)\underbrace{f(u)}_{U\sim U[0,1]\,=\,1} du + \int_{1/2}^1 h(u)h(1-u)\underbrace{f(u)}_{U\sim U[0,1]\,=\,1} du \\ & = \int_0^{1/2} 2u\cdot(2-2(1-u))du + \int_{1/2}^1 2(1-u)\cdot(2-2u)du \\&= \int_0^{1/2} 4u^2du + \int_{1/2}^1 (2-2u)^2du = \frac{1}{3}\end{aligned}$$ Therefore $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$, so $\Delta ={1\over 24}>0$, and we conclude that the variance of the complementary-numbers method is larger than the variance of plain Monte Carlo. Validation of the previous result
###Code
np.random.seed(514)

# piecewise triangle function h(x)
def h(x):
    if 0 <= x < 0.5:
        return 2 * x
    elif 0.5 <= x <= 1:
        return 2 - 2 * x
    else:
        return 0

# plot h(x)
x = np.arange(-.5, 1.5, 0.01)
plt.plot(x, list(map(h, x)), label='h(x)')
plt.legend()
plt.show()

# approximate the value of the integral using plain Monte Carlo
N = 20
u1 = np.random.rand(N)
media_montecarlo = np.mean(list(map(h, u1)))

# approximation using the complementary-numbers method
# note: for a fair comparison use N/2 fresh uniforms plus their complements (N evaluations in total)
u2 = np.random.rand(N // 2)
u2_c = 1 - u2
media_complementario = np.mean(list(map(h, np.concatenate([u2, u2_c]))))

print('Mean using plain Monte Carlo =', media_montecarlo)
print('Mean using complementary numbers =', media_complementario)
print('Theoretical mean =', (0 + .5 + 1) / 3)   # exact value of E[h(U)] is 1/2
# mean of a triangular distribution: https://en.wikipedia.org/wiki/Triangular_distribution
###Output
_____no_output_____
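###Markdown
A numerical check of the derivation above: estimating $Cov[h(U),h(1-U)]$ by simulation should give a value close to $1/12 \approx 0.083$, confirming that the covariance is positive and that the complementary-numbers estimator is worse here. A minimal sketch, assuming `h` from the previous cell:
###Code
np.random.seed(7)
u = np.random.rand(200000)
hu = np.array(list(map(h, u)))
hu_c = np.array(list(map(h, 1 - u)))
print('E[h(U)]           =', hu.mean())              # should be close to 1/2
print('Cov[h(U), h(1-U)] =', np.cov(hu, hu_c)[0, 1])  # should be close to 1/12
###Output
_____no_output_____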
###Markdown
Why did the method fail in this example? We showed that the variables $U$ and $1-U$ are negatively correlated, but in general it cannot be guaranteed that $X_1$ and $X_2$ satisfy this property. To be sure that negative correlation in the input random numbers produces negative correlation in the observed output, we must require a monotone relationship between them. The exponential function is monotone, but the triangle function of the second example is not. Application example: exercise taken from Introduction to Operations Research, 9th ed., p. 1148.
###Code
# number of terms
N = 10
# inverse CDF of the distribution in the exercise (to be completed)
f_inv = lambda u:
# METHODS TO APPROXIMATE THE MEAN OF THE DISTRIBUTION
# 1. Crude Monte Carlo
# 2. Stratified sampling
# 3. Complementary-numbers method
###Output
_____no_output_____
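###Markdown
The exercise above asks for three estimates of the distribution's mean, but the textbook's distribution is not reproduced in this notebook. As a stand-in, the sketch below uses the exponential inverse CDF from earlier; what matters is the structure of the three estimators (crude Monte Carlo, stratified sampling over equal strata, and complementary numbers):
###Code
np.random.seed(123)
N = 10
f_inv_demo = lambda u: -np.log(u)   # stand-in inverse CDF (exponential), not the textbook's
# 1. Crude Monte Carlo: N plain uniform draws
u = np.random.rand(N)
print('Crude Monte Carlo :', f_inv_demo(u).mean())
# 2. Stratified sampling: one uniform draw inside each of N equal strata of [0,1]
strata = (np.arange(N) + np.random.rand(N)) / N
print('Stratified        :', f_inv_demo(strata).mean())
# 3. Complementary (antithetic) numbers: N/2 uniforms plus their complements
u_half = np.random.rand(N // 2)
print('Complementary     :', f_inv_demo(np.concatenate([u_half, 1 - u_half])).mean())
###Output
_____no_output_____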
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,3)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Función para generar variables alestorias exponenciales
xi = lambda ri: -np.log(ri)
# Generación de Números aleatorios
ri =
# Media de observaciones aleatorias (Montecarlo estándar)
m_rand =
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c =
xi_c =
# Media de observaciones complementarias
m_comple =
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1 - U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.title(r'$u_1 = e^{u_2}$')
plt.show()
###Output
_____no_output_____
###Markdown
En el caso de nuestro ejemplo de generación variables aleatorias exponenciales el método de la transformada inversa nos arrojó los siguientes resultados:$$x_i = -\ln u_i \ \rightarrow e^{-x_i} = u_i$$Lo cuál graficándolo nos arroja el siguiente resultado:
###Code
xi = np.random.rand(10)
ui = np.exp(-xi)
plt.plot(xi,ui,'o')
plt.title(r'$u_i = e^{-x_i}$')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado.>**Característica de las funciones monótonas:** Es decir una función es monótona cuando es creciente o decreciente en todo su dominio. Ejemplo donde el método de números complemantarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 2}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complemantarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$> **Recordar la expresión para calcular la esperanza:**> $$ \mathbb {E} [X]=\int_{-\infty }^{\infty }x f(x)dx $$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2} \rightarrow \text{expresión de la media para $U\sim U[0,1]$}$$$$\begin{aligned}E[h(U)h(1-U)]&= \int_0^{1/2} h(U)h(1-U)\underbrace{f(x)}_{U\sim [0,1] = 1} du + \int_{1/2}^1 h(U)h(1-U)\underbrace{f(x)}_{U\sim [0,1] = 1} du \\E[h(u)h(1-u)] & = \int_0^{1/2} 2u\cdot(2-2(1-u))du + \int_{1/2}^1 2(1-u)\cdot(2-2u)du \\&= \int_0^{1/2} 4u^2du + \int_{1/2}^1 (2-2u)^2du = \frac{1}{3}\end{aligned}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común Validación del resultado anterior
###Code
np.random.seed(514)
# Programar función h(x)
# Gráfica de la función h(x)
x = np.arange(-.5,1.5,0.01)
plt.plot(x,list(map(f,x)),label='f(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
# Aproximación usando método de los números complementarios
# Nota: Para ser justos tome la misma cantidad de términos para efectos de comparación
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =',(0 + .5 + 1)/3 )
# Distrubución triangular (media)
# https://en.wikipedia.org/wiki/Triangular_distribution
###Output
_____no_output_____
###Markdown
¿Por qué fallo el método en este ejemplo?Se demostró que las variables $U$ y $1-U$ están negativamente correlacionadas, pero en general no se puede garantizar que $X_1$ y $X_2$ cumplan esta propiedad en general.Para estar seguros de que la correlación negativa en los números aleatorios de entrada produce una correlación negativa en la salida observada, debemos exigir una relación monótona entre ellos. La función exponencial es una función monótona, pero la función triángulo del segundo ejemplo no lo es. Ejemplo de aplicación: Ejercicio tomado de: Introduction to Operations Research, 9ª ed. pag, 1148.
###Code
# Cantidad de términos
N = 10
# Función inversa
f_inv = lambda u:
# MÉTODOS PARA APROXIMAR LA MEDIA DE LA DISTRIBUCIÓN
# 1. Montecarlo crudo
# 2. Método estratificado
# 3. Método números complementarios
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,3)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Generación de Números aleatorios
ri = np.random.rand(10)
xi = -np.log(ri)
# Media de observaciones aleatorias
m_rand = np.mean(xi)
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1-ri
xi_c = -np.log(ri_c)
# Media de observaciones complementarias
m_comple = np.mean(xi_c)
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Generación de Números aleatorios
ri = np.random.rand(10)
xi = -np.log(ri)
# Media de observaciones aleatorias
m_rand = np.mean(xi)
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1-ri
xi_c = -np.log(ri_c)
# Media de observaciones complementarias
m_comple = np.mean(xi_c)
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1- U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado. Ejemplo donde el método de números complemantarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 4}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complemantarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común Validación del resultado anterior
###Code
def f(x):
if x>=0 and x<=.5:
f = 2*x
elif x>.5 and x<=0:
f = 2-2*x
else:
f = 0
return f
x = np.arange(-1,1,0.01)
plt.plot(x,list(map(f,x)),label='f(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
N = 100
u1 = np.random.rand(N)
f_u1 = list(map(f,u1))
media_montecarlo = np.mean(f_u1)
# Aproximación usando método de los números complementarios
u2 = 1-u1
f_u2 = list(map(f,u2))
media_complementario = (np.mean(f_u2)+media_montecarlo)/2
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =',media_complementario)
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,3)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
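# --- Added note (not part of the original notebook): the forced balance of complementary pairs ---
# Each pair (x, b - x + a) is symmetric about the midpoint (a+b)/2, so every pairwise mean
# equals (a+b)/2 = 5 exactly; this forced balance is where the variance reduction comes from.
print('Medias por pareja:', (x + xc) / 2)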
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Generación de Números aleatorios
ri = np.random.rand(10)
xi = -np.log(ri)
# Media de observaciones aleatorias
m_rand = np.mean(xi)
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1-ri
xi_c = -np.log(ri_c)
# Media de observaciones complementarias
m_comple = np.mean(xi_c)
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
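# --- Added sketch (not part of the original notebook): correlation induced by the complements ---
# The analysis below argues that the gain comes from a negative correlation between the pair
# (-ln U, -ln(1-U)); here we just estimate that correlation from the same 10 draws.
print('Correlación muestral entre xi y xi_c =', np.corrcoef(xi, xi_c)[0, 1])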
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1- U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
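# --- Added sketch (not part of the original notebook): Var[X̄(n)] = Var(X)/(2n)·(1+ρ) in numbers ---
# For X = -ln(U) (an exponential variable) we estimate ρ(X1, X2) for the pair built from
# (U, 1-U) and plug it into the formula derived above; `n_pairs` and the names are ours.
n_pairs = 10000
u = np.random.rand(n_pairs)
xa, xb = -np.log(u), -np.log(1 - u)
rho = np.corrcoef(xa, xb)[0, 1]
print('ρ estimado =', rho)
print('Var del promedio con complementarios ≈', xa.var() / (2 * n_pairs) * (1 + rho))
print('Var del promedio con muestras independientes ≈', xa.var() / (2 * n_pairs))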
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado. Ejemplo donde el método de números complemantarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 4}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complemantarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común Validación del resultado anterior
###Code
def f(x):
if x>=0 and x<=.5:
f = 2*x
    elif x>.5 and x<=1:
f = 2-2*x
else:
f = 0
return f
x = np.arange(-1,1,0.01)
plt.plot(x,list(map(f,x)),label='f(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
N = 100
u1 = np.random.rand(N)
f_u1 = list(map(f,u1))
media_montecarlo = np.mean(f_u1)
# Aproximación usando método de los números complementarios
u2 = 1-u1
f_u2 = list(map(f,u2))
media_complementario = (np.mean(f_u2)+media_montecarlo)/2
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Función para generar variables alestorias exponenciales
xi = lambda ui: -np.log(ui)
# Generación de Números aleatorios
N = 10
ri = np.random.rand(N)
# Media de observaciones aleatorias
m_rand = xi(ri).mean()
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = np.random.rand(int(N/2))
xi_c = 1 - ri_c
# Media de observaciones complementarias
m_comple = xi(np.concatenate([ri_c, xi_c])).mean()
print('Media de observaciones usando el M.N.C = ', m_comple)
###Output
Media de observaciones aleatorias = 0.5801270297247335
Media de observaciones usando el M.N.C = 0.8353812035280248
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1 - U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.title(r'$u_1 = e^{u_2}$')
plt.show()
###Output
_____no_output_____
###Markdown
En el caso de nuestro ejemplo de generación variables aleatorias exponenciales el método de la transformada inversa nos arrojó los siguientes resultados:$$x_i = -\ln u_i \ \rightarrow e^{-x_i} = u_i$$Lo cuál graficándolo nos arroja el siguiente resultado:
###Code
xi = np.random.rand(10)
ui = np.exp(-xi)
plt.plot(xi,ui,'o')
plt.title(r'$u_i = e^{-x_i}$')
plt.show()
###Output
_____no_output_____
###Markdown
2. Another reason is that the exponential function is monotonically increasing. As we will see shortly, monotonicity is an important condition for the stratified sampling method. >**Property of monotone functions:** a function is monotone when it is either increasing or decreasing over its whole domain. Example where the complementary numbers method can fail Consider the function $h(x)$ defined as: $$h(x)=\begin{cases}0,& x<0,\\ 2x,& 0\le x\le \frac{1}{2},\\ 2-2x,& \frac{1}{2}< x\le 1,\\ 0,& x>1,\end{cases}$$ and suppose we want to approximate the integral $\int_0^1h(x)dx$ using Monte Carlo. As can be seen, the function $h(x)$ is a triangle, and the area under its curve is: $$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1\, du = {1\over 2}$$ Let us now estimate the value of this integral with the standard Monte Carlo method and with the complementary numbers method $$\textbf{Standard Monte Carlo}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{2}$$$$\textbf{Complementary numbers method}\rightarrow X_c={h(U)+h(1-U) \over 2}$$ Now let us compare the variances of the two estimators: $$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$ > **Recall the expression for the expected value:** > $$ \mathbb {E} [X]=\int_{-\infty }^{\infty }x f(x)dx $$ To see which variance is larger, we take the difference of the two, which gives: $$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$ In this case, because of the shape of $h(x)$, we have: $$E[h(U)]=E[h(1-U)]={1\over 2} \rightarrow \text{the area computed above}$$ $$\begin{aligned}E[h(U)h(1-U)]&= \int_0^{1/2} h(u)h(1-u)\underbrace{f(u)}_{=1,\ U\sim U[0,1]} du + \int_{1/2}^1 h(u)h(1-u)\underbrace{f(u)}_{=1,\ U\sim U[0,1]} du \\ & = \int_0^{1/2} 2u\cdot(2-2(1-u))\,du + \int_{1/2}^1 2(1-u)\cdot(2-2u)\,du \\&= \int_0^{1/2} 4u^2\,du + \int_{1/2}^1 (2-2u)^2\,du = \frac{1}{3}\end{aligned}$$ Therefore $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$, so $\Delta ={1\over 24}>0$, and we conclude that the variance of the complementary numbers method is larger than the variance of plain Monte Carlo Validation of the previous result
###Code
h1 = lambda x: 0 if x<0 else (2 * x if 0 <= x < 0.5 else
(2 - 2 * x if .5 <= x <= 1 else 0))
def h2(x):
if x<0:
return 0
elif 0 <= x < 0.5:
return 2 * x
elif .5 <= x <= 1:
return 2 - 2 * x
else:
return 0
x = np.arange(-.5, 1.5, .1)
plt.plot(x, list(map(h2, x)))
np.random.seed(514)
# Programar función h(x)
h = lambda x: 0 if x<0 else (2 * x if 0 <= x < 0.5 else
(2 - 2 * x if .5 <= x <= 1 else 0))
# Gráfica de la función h(x)
x = np.arange(-.5,1.5,0.01)
plt.plot(x,list(map(h,x)),label='h(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
N = 20
u1 = np.random.rand(N)
media_montecarlo = np.mean(list(map(h, u1)))
# Aproximación usando método de los números complementarios
# Nota: Para ser justos tome la misma cantidad de términos para efectos de comparación
u2 = np.random.rand(int(N))
u2_c = 1 - u2
media_complementario = np.mean(list(map(h, np.concatenate([u2, u2_c]))))
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =',(0 + .5 + 1)/3 )
# Distrubución triangular (media)
# https://en.wikipedia.org/wiki/Triangular_distribution
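# --- Added sketch (not part of the original notebook): numerical check of E[h(U)h(1-U)] = 1/3 ---
# Monte Carlo estimate of the cross moment used in the covariance computation above.
u_chk = np.random.rand(200000)
cross = np.mean([h(v) * h(1 - v) for v in u_chk])
print('E[h(U)h(1-U)] ≈', cross, '(teórico 1/3);  Cov ≈', cross - 0.25, '(teórico 1/12)')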
###Output
_____no_output_____
###Markdown
¿Por qué fallo el método en este ejemplo?Se demostró que las variables $U$ y $1-U$ están negativamente correlacionadas, pero en general no se puede garantizar que $X_1$ y $X_2$ cumplan esta propiedad en general.Para estar seguros de que la correlación negativa en los números aleatorios de entrada produce una correlación negativa en la salida observada, debemos exigir una relación monótona entre ellos. La función exponencial es una función monótona, pero la función triángulo del segundo ejemplo no lo es. Ejemplo de aplicación: Ejercicio tomado de: Introduction to Operations Research, 9ª ed. pag, 1148.
###Code
# Cantidad de términos
N = 10
# Función inversa
f_inv = lambda u: 1/(1-u)  # completed with the inverse transform used in the worked solution elsewhere in this notebook
# MÉTODOS PARA APROXIMAR LA MEDIA DE LA DISTRIBUCIÓN
# 1. Montecarlo crudo
# 2. Método estratificado
# 3. Método números complementarios
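# --- Added sketch (not part of the original notebook): one possible way to fill in the three methods,
# mirroring the worked cell that appears later in the notebook; strata and sample sizes are assumptions.
u_cr = np.random.rand(N)
m_crudo = f_inv(u_cr).mean()                                   # 1. crude Monte Carlo
estratos = [np.random.uniform(0, .6, 3), np.random.uniform(.6, .9, 3), np.random.uniform(.9, 1, 4)]
pesos = [(3 / N) / .6, (3 / N) / .3, (4 / N) / .1]             # 2. stratified: weight_i = (n_i/N)/p_i
m_estrat = np.concatenate([f_inv(r) / w for r, w in zip(estratos, pesos)]).mean()
m_comp = f_inv(np.concatenate([u_cr, 1 - u_cr])).mean()        # 3. complementary numbers
print('Crudo =', m_crudo, ' Estratificado =', m_estrat, ' Complementarios =', m_comp)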
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.concatenate([np.random.rand(10), np.random.rand(10)]).shape
np.random.seed(95555)
N = 10
# Función para generar variables alestorias exponenciales
xi = lambda ri: -np.log(ri)
# Generación de Números aleatorios
ri = np.random.rand(N)
# Media de observaciones aleatorias (Montecarlo estándar)
m_rand = xi(ri).mean()
print('Media de observaciones aleatorias con montecarlo estándar = ', m_rand)
# Números aleatorios complementarios
ri_c = 1 - ri
xi_c = xi(np.concatenate([ri, ri_c]))
# Media de observaciones complementarias
m_comple = xi_c.mean()
print('Media de observaciones aleatorias con M.N.C = ', m_comple)
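# --- Added sketch (not part of the original notebook): same-budget reference ---
# The M.N.C estimate above gets 2N exponential values out of only N uniform draws; for
# comparison, this is what an independent estimate with 2N fresh uniforms looks like.
print('Media con 2N uniformes independientes = ', xi(np.random.rand(2 * N)).mean())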
###Output
Media de observaciones aleatorias con montecarlo estándar = 0.5801270297247335
Media de observaciones aleatorias con M.N.C = 0.9776997833969057
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1 - U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.title(r'$u_1 = e^{u_2}$')
plt.show()
###Output
_____no_output_____
###Markdown
En el caso de nuestro ejemplo de generación variables aleatorias exponenciales el método de la transformada inversa nos arrojó los siguientes resultados:$$x_i = -\ln u_i \ \rightarrow e^{-x_i} = u_i$$Lo cuál graficándolo nos arroja el siguiente resultado:
###Code
xi = np.random.rand(10)
ui = np.exp(-xi)
plt.plot(xi,ui,'o')
plt.title(r'$u_i = e^{-x_i}$')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado.>**Característica de las funciones monótonas:** Es decir una función es monótona cuando es creciente o decreciente en todo su dominio. Ejemplo donde el método de números complemantarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 2}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complemantarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$> **Recordar la expresión para calcular la esperanza:**> $$ \mathbb {E} [X]=\int_{-\infty }^{\infty }x f(x)dx $$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2} \rightarrow \text{expresión de la media para $U\sim U[0,1]$}$$$$\begin{aligned}E[h(U)h(1-U)]&= \int_0^{1/2} h(U)h(1-U)\underbrace{f(x)}_{U\sim [0,1] = 1} du + \int_{1/2}^1 h(U)h(1-U)\underbrace{f(x)}_{U\sim [0,1] = 1} du \\E[h(u)h(1-u)] & = \int_0^{1/2} 2u\cdot(2-2(1-u))du + \int_{1/2}^1 2(1-u)\cdot(2-2u)du \\&= \int_0^{1/2} 4u^2du + \int_{1/2}^1 (2-2u)^2du = \frac{1}{3}\end{aligned}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común Validación del resultado anterior
###Code
np.random.seed(54)
# Programar función h(x)
h = lambda x: 0 if x < 0 else (2 * x if 0 <= x < 0.5 else (2 - 2 * x if 0.5 <= x <= 1 else 0))
# Gráfica de la función h(x)
x = np.arange(-.5,1.5,0.01)
plt.plot(x,list(map(h,x)),label='h(x)')
plt.legend()
plt.show()
N = 10
# aproximar el valor de la integral usando montecarlo típico
u1 = np.random.rand(N)
media_montecarlo = np.mean(list(map(lambda u: h(u), u1)))
# Aproximación usando método de los números complementarios
u2 = np.random.rand(int(N/2))
u2_c = 1 - u2
media_complementario = np.mean(list(map(lambda u: h(u), np.concatenate([u2, u2_c]))))
# Nota: Para ser justos tome la misma cantidad de términos para efectos de comparación
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =',(0 + .5 + 1)/3 )
# Distrubución triangular (media)
# https://en.wikipedia.org/wiki/Triangular_distribution
###Output
_____no_output_____
###Markdown
¿Por qué fallo el método en este ejemplo?Se demostró que las variables $U$ y $1-U$ están negativamente correlacionadas, pero en general no se puede garantizar que $X_1$ y $X_2$ cumplan esta propiedad en general.Para estar seguros de que la correlación negativa en los números aleatorios de entrada produce una correlación negativa en la salida observada, debemos exigir una relación monótona entre ellos. La función exponencial es una función monótona, pero la función triángulo del segundo ejemplo no lo es. Ejemplo de aplicación: Ejercicio tomado de: Introduction to Operations Research, 9ª ed. pag, 1148.
###Code
# Cantidad de términos
N = 10
# Función inversa
f_inv = lambda u: 1/(1-u)  # completed with the inverse transform used in the worked solution elsewhere in this notebook
# MÉTODOS PARA APROXIMAR LA MEDIA DE LA DISTRIBUCIÓN
# 1. Montecarlo crudo
# 2. Método estratificado
# 3. Método números complementarios
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Generación de Números aleatorios
ri = np.random.rand(10)
xi = -np.log(ri)
# Media de observaciones aleatorias
m_rand = np.mean(xi)
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1-ri
xi_c = -np.log(ri_c)
# Media de observaciones complementarias
m_comple = np.mean(xi_c)
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1- U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado. Ejemplo donde el método de números complemantarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 4}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complemantarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común Validación del resultado anterior
###Code
def f(x):
if x>=0 and x<=.5:
f = 2*x
    elif x>.5 and x<=1:
f = 2-2*x
else:
f = 0
return f
x = np.arange(-1,1,0.01)
plt.plot(x,list(map(f,x)),label='f(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
N = 100
u1 = np.random.rand(N)
f_u1 = list(map(f,u1))
media_montecarlo = np.mean(f_u1)
# Aproximación usando método de los números complementarios
u2 = 1-u1
f_u2 = list(map(f,u2))
media_complementario = (np.mean(f_u2)+media_montecarlo)/2
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =', 0.5)
###Output
_____no_output_____
###Markdown
¿Por qué fallo el método en este ejemplo?Se demostró que las variables $U$ y $1-U$ están negativamente correlacionadas, pero en general no se puede garantizar que $X_1$ y $X_2$ cumplan esta propiedad en general.Para estar seguros de que la correlación negativa en los números aleatorios de entrada produce una correlación negativa en la salida observada, debemos exigir una relación monótona entre ellos. La función exponencial es una función monótona, pero la función triángulo del segundo ejemplo no lo es. Ejemplo de aplicación: Ejercicio tomado de: Introduction to Operations Research, 9ª ed. pag, 1148.
###Code
f_inv = lambda u:1/(1-u)
N = 10
# montecarlo crudo
ui = np.random.rand(N)
m1 = np.mean(f_inv(ui))
# método estratificado
r1 = np.random.uniform(0,0.6,3)
r2 = np.random.uniform(0.6,0.9,3)
r3 = np.random.uniform(0.9,1,4)
w1 = (3/N)/0.6
w2 = (3/N)/0.3
w3 = (4/N)/0.1
r = [r1,r2,r3]
w = [w1,w2,w3]
xi = list(map(f_inv,r))
xi_es = list(map(lambda x,w:x/w,xi,w))
m2 = np.concatenate(xi_es).mean()
# método números complementarios
ui_c = 1-ui
m_c = f_inv(ui_c).mean()
m3 = (m1+m_c)/2
m3
u_c = np.concatenate([ui,ui_c])
m4 = f_inv(u_c).mean()
m3,m4
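# --- Added note (not part of the original notebook): label the three estimates for readability ---
print('1. Montecarlo crudo =', m1)
print('2. Estratificado =', m2)
print('3. Números complementarios =', m3, '(variante concatenando los 2N valores:', m4, ')')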
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,50)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
N = 10
# Función para generar variables alestorias exponenciales
xi = lambda ri: -np.log(ri)
# Generación de Números aleatorios
ri = np.random.rand(N)
# Media de observaciones aleatorias (Montecarlo estándar)
m_rand = xi(ri).mean()
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1 - ri
xi_c = xi(ri_c)
# Media de observaciones complementarias
m_comple = xi_c.mean()
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.5801270297247335
Media de observaciones complementarias = 1.3752725370690784
La media estimada con el M.N.C es = 0.9776997833969059
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1 - U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.title(r'$u_1 = e^{u_2}$')
plt.show()
###Output
_____no_output_____
###Markdown
En el caso de nuestro ejemplo de generación variables aleatorias exponenciales el método de la transformada inversa nos arrojó los siguientes resultados:$$x_i = -\ln u_i \ \rightarrow e^{-x_i} = u_i$$Lo cuál graficándolo nos arroja el siguiente resultado:
###Code
xi = np.random.rand(10)
ui = np.exp(-xi)
plt.plot(xi,ui,'o')
plt.title(r'$u_i = e^{-x_i}$')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado.>**Característica de las funciones monótonas:** Es decir una función es monótona cuando es creciente o decreciente en todo su dominio. Ejemplo donde el método de números complementarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 2}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complementarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$> **Recordar la expresión para calcular la esperanza:**> $$ \mathbb {E} [X]=\int_{-\infty }^{\infty }x f(x)dx $$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2} \rightarrow \text{expresión de la media para $U\sim U[0,1]$}$$$$\begin{aligned}E[h(U)h(1-U)]&= \int_0^{1/2} h(U)h(1-U)\underbrace{f(x)}_{U\sim [0,1] = 1} du + \int_{1/2}^1 h(U)h(1-U)\underbrace{f(x)}_{U\sim [0,1] = 1} du \\E[h(u)h(1-u)] & = \int_0^{1/2} 2u\cdot(2-2(1-u))du + \int_{1/2}^1 2(1-u)\cdot(2-2u)du \\&= \int_0^{1/2} 4u^2du + \int_{1/2}^1 (2-2u)^2du = \frac{1}{3}\end{aligned}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común 
###Code
h = lambda x: 0 if x < 0 else (2 * x if 0 <= x < 0.5 else(2 - 2 * x if 0.5 <= x <= 1 else 0))
N = 10
u1 = np.random.rand(N)
media_montecarlo = np.mean(list(map(h, u1)))
media_montecarlo
###Output
_____no_output_____
###Markdown
Validación del resultado anterior
###Code
np.random.seed(5148)
# Programar función h(x)
h = lambda x: 0 if x < 0 else (2 * x if 0 <= x < 0.5 else(2 - 2 * x if 0.5 <= x <= 1 else 0))
# Gráfica de la función h(x)
x = np.arange(-.5,1.5,0.01)
plt.plot(x,list(map(h,x)),label='h(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
N = 10
u1 = np.random.rand(N)
media_montecarlo = np.mean(list(map(h, u1)))
# Aproximación usando método de los números complementarios
# Nota: Para ser justos tome la misma cantidad de términos para efectos de comparación
u2 = np.random.rand(int(N/2))
u2_c = 1 - u2
media_complementario = np.mean(list(map(h, np.concatenate([u2, u2_c]))))
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =',(0 + .5 + 1)/3 )
# Distrubución triangular (media)
# https://en.wikipedia.org/wiki/Triangular_distribution
(media_montecarlo - 0.5) / 0.5, (media_complementario -0.5) / 0.5
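# --- Added sketch (not part of the original notebook): dispersion of both estimators ---
# A single run can be misleading; repeating the experiment shows how each estimator spreads
# around the true value 0.5 (the complementary version uses N/2 draws plus their complements).
def un_experimento():
    ua = np.random.rand(N)
    ub = np.random.rand(int(N / 2))
    return np.mean(list(map(h, ua))), np.mean(list(map(h, np.concatenate([ub, 1 - ub]))))
resultados = np.array([un_experimento() for _ in range(5000)])
print('Desv. estándar montecarlo =', resultados[:, 0].std())
print('Desv. estándar complementarios =', resultados[:, 1].std())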
###Output
_____no_output_____
###Markdown
¿Por qué fallo el método en este ejemplo?Se demostró que las variables $U$ y $1-U$ están negativamente correlacionadas, pero en general no se puede garantizar que $X_1$ y $X_2$ cumplan esta propiedad en general.Para estar seguros de que la correlación negativa en los números aleatorios de entrada produce una correlación negativa en la salida observada, debemos exigir una relación monótona entre ellos. La función exponencial es una función monótona, pero la función triángulo del segundo ejemplo no lo es. Ejemplo de aplicación: Ejercicio tomado de: Introduction to Operations Research, 9ª ed. pag, 1148.
###Code
# Cantidad de términos
N = 10
# Función inversa
f_inv = lambda u: 1/(1-u)  # completed with the inverse transform used in the worked solution elsewhere in this notebook
# MÉTODOS PARA APROXIMAR LA MEDIA DE LA DISTRIBUCIÓN
# 1. Montecarlo crudo
# 2. Método estratificado
# 3. Método números complementarios
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,4)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Generación de Números aleatorios
ri = np.random.rand(10)
xi = -np.log(ri)
# Media de observaciones aleatorias
m_rand = np.mean(xi)
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1-ri
xi_c = -np.log(ri_c)
# Media de observaciones complementarias
m_comple = np.mean(xi_c)
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relación entre las variables x1 = U y x2 = 1- U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
###Markdown
El ejemplo de aplicación anterior mostró como el método es bastante sencillo y trabaja bastante bien. Pero, ¿Podemos esperar siempre un patrón similar? Desafortunadamente, la respuesta es no. La razón por la cual el enfoque funciona bastante bien en el ejemplo son dos:1. Existe una fuerte correlación (positiva) entre $U$ y $e^u$ en el intervalo [0,1], porque la función es casi lineal allí. Esto significa que se conserva una fuerte correlación en la entrada de simulación y se convierte en una fuerte correlación en la salida de la simulación. No deberíamos esperar resultados impresionantes con funciones más complicadas y no lineales.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.show()
###Output
_____no_output_____
###Markdown
2.Otra razón es que la función exponencial es creciente monótona. Como veremos en breve, la monotonicidad es una condición importante para el método de muestreo estratificado. Ejemplo donde el método de números complemantarios puede fallarConsidere la función $h(x)$ definida como:$$h(x)=\begin{cases}0,& x1,\end{cases}$$y supongamos que queremos aproximar la integrar $\int_0^1h(x)dx$ usando monte carlo.Como se puede observar, la función $h(x)$ es un triangulo, y el area encerrada bajo su curva es:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1 du = {1\over 4}$$Ahora entonces estimemos el valor de esta integral con el método tradicional de monte carlo y con el método de números complementarios$$\textbf{Monte carlo tradicional}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Método números complemantarios}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Ahora comparemos las dos varianzas de los dos estimadores:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$Para saber que varianza es mayor, encontramos la diferencia de estas dos varianzas, en donde se obtiene:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$En este caso, debido a la forma de $h(x)$, tenemos que:$$E[h(U)]=E[h(1-U)]={1\over 2}$$Por lo tanto, $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$ y de esta manera $\Delta ={1\over 24}>0$ y se concluye entonces que la varianza del método de números complementarios es mayor que la varianza del método de monte carlo común Validación del resultado anterior
###Code
def f(x):
if x>=0 and x<=.5:
f = 2*x
    elif x>.5 and x<=1:
f = 2-2*x
else:
f = 0
return f
x = np.arange(-1,1,0.01)
plt.plot(x,list(map(f,x)),label='f(x)')
plt.legend()
plt.show()
# aproximar el valor de la integral usando montecarlo típico
N = 100
u1 = np.random.rand(N)
f_u1 = list(map(f,u1))
media_montecarlo = np.mean(f_u1)
# Aproximación usando método de los números complementarios
u2 = 1-u1
f_u2 = list(map(f,u2))
media_complementario = (np.mean(f_u2)+media_montecarlo)/2
print('Media usando montecarlo estándar =',media_montecarlo)
print('Media usando números complementarios =',media_complementario)
print('Media teórica =', 0.5)
###Output
_____no_output_____
###Markdown
Continuación técnicas de reducción de varianza b). Números complementarios- Se tiene un cierto número de observaciones o valores aleatorios- Estos valores en ocasiones puede tener algún sesgo no deseado, además de no haber sido obligado o intencionado, como ejemplo supongamos que se hacen 10 observaciones donde los valores posibles son valores entre [0,1] y en todos los casos se obtuvieron datos menores a 0.5, lo cual sería inusual.- Obtener más observaciones puede ser costoso, no posible bajo las mismas condiciones o simplemente tiene un elevado costo computacional, así que lo que sugiere la técnica de “números complementarios” es obtener otro tanto de valores utilizando la fórmula > Aleatorio nuevo = Límite superior aleatorio generado - Aleatorio generado + Límite inferior aleatorio generado.> **Ejemplo:** si $x\sim U[a,b]$ el número complementario para este número aleatorio es>$$x_{comp}=b-x+a$$> *Caso particular a=0,b=1* $$x_{comp}=1-x$$- Estos valores le dan equilibrio forzado a las observaciones o números aleatorios y permiten hacer una evaluación del proceso con valores que presentarán menor varianza.- Además que se logra obtener el doble de números respecto a los observados para simular el proceso. Ejemplo de ilustración
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle # Librería para hacer ciclos
import matplotlib.pyplot as plt
import scipy.stats as st # Librería estadística
import pandas as pd
cycol = cycle('bgrcmk') # variable que contiene los colores a graficar
a = 2; b = 8
x = np.random.uniform(a,b,3)
xc = b - x + a
# print(x,xc)
for i in range(len(x)):
c = next(cycol)
plt.plot(x[i],0,'>', c=c)
plt.plot(xc[i],0,'o',c=c)
plt.hlines(0,a,b)
plt.xlim(a,b)
plt.show()
###Output
_____no_output_____
###Markdown
Ejemplo de aplicaciónTomando como base el ejemplo de generación de números aleatorios exponenciales visto la clase pasada, ilustraremos este método.
###Code
np.random.seed(95555)
# Generación de Números aleatorios
ri = np.random.rand(10)
xi = -np.log(ri)
# Media de observaciones aleatorias
m_rand = np.mean(xi)
print('Media de observaciones aleatorias = ', m_rand)
# Números aleatorios complementarios
ri_c = 1-ri
xi_c = -np.log(ri_c)
# Media de observaciones complementarias
m_comple = np.mean(xi_c)
print('Media de observaciones complementarias = ', m_comple)
m_estimada = (m_rand+m_comple)/2
print('La media estimada con el M.N.C es = ',m_estimada)
###Output
Media de observaciones aleatorias = 0.580127029725
Media de observaciones complementarias = 1.37527253707
La media estimada con el M.N.C es = 0.977699783397
###Markdown
Análisis: ¿Por qué el método funciona? RecordarAhora analicemos matemáticamente el efecto que esta sucediendo con este método.Recordemos la expresión de la varianza del estimador de media (promedio):donde $$\rho _{X,Y}={\sigma _{XY} \over \sigma _{X}\sigma _{Y}}={E[(X-\mu _{X})(Y-\mu _{Y})] \over \sigma _{X}\sigma _{Y}}= {cov(X,Y) \over \sigma_X \sigma_Y}$$es el coeficiente de correlación de Pearson, y su valor varía en el intervalo [-1,1], indicando el signo el sentido de la relación. En la siguiente imagen se ilustra, varios grupos de puntos (x, y), con el coeficiente de correlación para cada grupo  - la **covarianza** es un valor que indica el grado de variación conjunta de dos variables aleatorias respecto a sus medias. Es el dato básico para determinar si existe una dependencia entre ambas variables.$$Cov(X,Y)=E[XY]-E[X]E[Y]$$- **coeficiente de correlación de Pearson** es una medida de la relación lineal entre dos variables aleatorias cuantitativas. A diferencia de la covarianza, la correlación de Pearson es independiente de la escala de medida de las variables Ahora recordemos el promedio de dos observaciones viene dado por la siguiente expresión:$$X^{(i)} = {X_1+X_2 \over 2}$$Ahora consideremos la media de una muestra $\bar X(n)$ basado en las muestras promediadas $X^{(i)}$, donde su varianza estará dada por:$$\begin{aligned}Var[\bar X(n)]&={Var(X^{(i)})\over n}\\&= {Var(X_1)+Var(X_2)+2Cov(X_1,X_2)\over 4n}\\&= {Var(X)\over 2n}(1+\rho(X_1,X_2))\end{aligned}$$> Concluimos que para poder obtener una reducción de varianza, con respecto al método de monte carlo tradicional, el coeficiente de correlación $\rho(X_1,X_2)<0$. Pero la pregunta entonces es, ¿Cómo podemos inducir una correlación negativa?.**Dibujar en el tablero la relación entre las variables {$U\sim U(0,1)$} y {$1-U$}**
###Code
# Relationship between the variables x1 = U and x2 = 1 - U
# x2 = np.random.rand(5)
x2 = np.array([1,.5,0])
x1 = 1-x2
plt.plot(x1,x2,'o-')
plt.show()
###Output
_____no_output_____
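###Markdown
A quick numerical check of the variance expression above, reusing the exponential example where $X=-\ln U$ (a sketch; the sample size `M` is arbitrary): pairing each $U$ with its complement $1-U$ induces a negative correlation between the two observations of a pair, and the variance of the paired average falls below that of two independent draws.
```python
# Sketch: variance of the estimator built from independent pairs vs. complementary pairs, for X = -log(U)
import numpy as np
M = 100_000
u1, u2 = np.random.rand(M), np.random.rand(M)
pair_indep = (-np.log(u1) - np.log(u2)) / 2        # average of two independent draws
pair_comp = (-np.log(u1) - np.log(1 - u1)) / 2     # average of a complementary pair
print('rho(X1, X2) for complementary pairs =', np.corrcoef(-np.log(u1), -np.log(1 - u1))[0, 1])
print('Var, independent pairs   =', np.var(pair_indep))
print('Var, complementary pairs =', np.var(pair_comp))
```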
###Markdown
The application example above showed that the method is quite simple and works quite well. But can we always expect a similar pattern? Unfortunately, the answer is no. There are two reasons why the approach works so well in the example: 1. There is a strong (positive) correlation between $U$ and $e^u$ on the interval [0,1], because the function is almost linear there. This means that a strong correlation in the simulation input is preserved and becomes a strong correlation in the simulation output. We should not expect equally impressive results with more complicated, nonlinear functions.
###Code
U1 = np.random.rand(10)
U2 = np.exp(U1)
plt.plot(U1,U2,'o')
plt.show()
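# Quantifying the claim above: the sample correlation between U and exp(U) is close to 1,
# since the exponential is almost linear on [0, 1]
print('corr(U, exp(U)) =', np.corrcoef(U1, U2)[0, 1])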
###Output
_____no_output_____
###Markdown
2. The other reason is that the exponential function is monotonically increasing. As we will see shortly, monotonicity is an important condition for the stratified sampling method. An example where the complementary-numbers method can failConsider the function $h(x)$ defined as:$$h(x)=\begin{cases}0,& x<0,\\ 2x,& 0\le x\le \tfrac{1}{2},\\ 2-2x,& \tfrac{1}{2}<x\le 1,\\ 0,& x>1,\end{cases}$$and suppose we want to approximate the integral $\int_0^1h(x)dx$ using Monte Carlo.As can be seen, the function $h(x)$ is a triangle, and the area enclosed under its curve is:$$\int_0^1h(x)dx \equiv E[h(U)]=\int_0^1h(u)\cdot 1\, du = {1\over 2}$$Let us now estimate the value of this integral with the traditional Monte Carlo method and with the complementary-numbers method$$\textbf{Traditional Monte Carlo}\rightarrow X_I=\frac{h(U_1)+h(U_2)}{ 2}$$$$\textbf{Complementary-numbers method}\rightarrow X_c={h(U)+h(1-U) \over 2}$$Now let us compare the variances of the two estimators:$$Var(X_I)={Var[h(U)]\over 2}\\Var(X_c)={Var[h(U)]\over 2}+{Cov[h(U),h(1-U)]\over 2}$$To determine which variance is larger, we take the difference of the two, obtaining:$$\begin{aligned}\Delta &= Var(X_c)-Var(X_I)={Cov[h(U),h(1-U)]\over 2} \\ &={1\over2}\{ E[h(U)h(1-U)]-E[h(U)]E[h(1-U)]\}\end{aligned}$$In this case, because of the shape of $h(x)$, we have:$$E[h(U)]=E[h(1-U)]={1\over 2}$$Therefore $Cov[h(U),h(1-U)]={1\over 3}-{1\over 4}={1\over 12}$, and thus $\Delta ={1\over 24}>0$; we conclude that the variance of the complementary-numbers method is larger than the variance of plain Monte Carlo Validation of the previous result
###Code
def f(x):
if x>=0 and x<=.5:
f = 2*x
    elif x>.5 and x<=1:
f = 2-2*x
else:
f = 0
return f
x = np.arange(-1,1,0.01)
plt.plot(x,list(map(f,x)),label='f(x)')
plt.legend()
plt.show()
# approximate the value of the integral using plain Monte Carlo
N = 100
u1 = np.random.rand(N)
f_u1 = list(map(f,u1))
media_montecarlo = np.mean(f_u1)
# approximation using the complementary-numbers method
u2 = 1-u1
f_u2 = list(map(f,u2))
media_complementario = (np.mean(f_u2)+media_montecarlo)/2
print('Mean using plain Monte Carlo =',media_montecarlo)
print('Mean using complementary numbers =',media_complementario)
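# As a further check of the variance claim above (a rough sketch): estimate the variances of the
# plain Monte Carlo estimator X_I and of the complementary estimator X_c with a larger sample;
# by the derivation, Var(X_c) - Var(X_I) should come out close to 1/24.
M = 100000
ua, ub = np.random.rand(M), np.random.rand(M)
h_ua = np.array(list(map(f, ua)))
h_ub = np.array(list(map(f, ub)))
h_uac = np.array(list(map(f, 1 - ua)))
print('Variance estimate, plain Monte Carlo     =', np.var((h_ua + h_ub) / 2))
print('Variance estimate, complementary numbers =', np.var((h_ua + h_uac) / 2))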
###Output
_____no_output_____
pytorch/gpytorch_regression.ipynb | ###Markdown
Gaussian Process Regression using GPyTorch Dependencies
###Code
!pip install gpytorch
###Output
Requirement already satisfied: gpytorch in /usr/local/lib/python3.6/dist-packages (1.0.0)
###Markdown
Function to model:\begin{align}y &= \sin(2\pi x) + \epsilon \;\; (1)\\ \epsilon &\sim \mathcal{N}(0, 0.2) \;\; (2) \end{align}with 100 training examples, and testing on 51 examples.
###Code
import math
import torch
import os
import matplotlib.pyplot as plt
from gpytorch.models import ExactGP
from gpytorch.means import ConstantMean
from gpytorch.kernels import ScaleKernel
from gpytorch.kernels import RBFKernel
from gpytorch.distributions import MultivariateNormal
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.mlls import ExactMarginalLogLikelihood
from gpytorch.settings import fast_pred_var
###Output
_____no_output_____
###Markdown
Setup training data
###Code
# training data is 100 points in [0,1] regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is (1); the noise added here is standard normal (a larger noise level than in (2))
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size())
###Output
_____no_output_____
###Markdown
Setting up the modelFor most GP regression models, you will need to construct the following GPyTorch objects:1. A **GP Model** (`gpytorch.models.ExactGP`) - This handles most of the inference.1. A **Likelihood** (`gpytorch.likelihoods.GaussianLikelihood`) - This is the most common likelihood used for GP regression.1. A **Mean** - This defines the prior mean of the GP.(If you don't know which mean to use, a `gpytorch.means.ConstantMean()` is a good place to start.)1. A **Kernel** - This defines the prior covariance of the GP.(If you don't know which kernel to use, a `gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())` is a good place to start).1. A **MultivariateNormal** Distribution (`gpytorch.distributions.MultivariateNormal`) - This is the object used to represent multivariate normal distributions. The GP Model The components of a user built (Exact, i.e. non-variational) GP model in GPyTorch are, broadly speaking:1. An `__init__` method that takes the training data and a likelihood, and constructs whatever objects are necessary for the model's `forward` method. This will most commonly include things like a mean module and a kernel module.2. A `forward` method that takes in some $n \times d$ data `x` and returns a `MultivariateNormal` with the *prior* mean and covariance evaluated at `x`. In other words, we return the vector $\mu(x)$ and the $n \times n$ matrix $K_{xx}$ representing the prior mean and covariance matrix of the GP. This specification leaves a large amount of flexibility when defining a model. For example, to compose two kernels via addition, you can either add the kernel modules directly:```pythonself.covar_module = ScaleKernel(RBFKernel() + WhiteNoiseKernel())```Or you can add the outputs of the kernel in the forward method:```pythoncovar_x = self.rbf_kernel_module(x) + self.white_noise_module(x)```
###Code
class ExactGPModel(ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = ConstantMean()
self.covar_module = ScaleKernel(RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
###Output
_____no_output_____
###Markdown
Training the modelThe most obvious difference here compared to many other GP implementations is that, as in standard PyTorch, the core training loop is written by the user. In GPyTorch, we make use of the standard PyTorch optimizers as from `torch.optim`, and all trainable parameters of the model should be of type `torch.nn.Parameter`. Because GP models directly extend `torch.nn.Module`, calls to methods like `model.parameters()` or `model.named_parameters()` function as you might expect coming from PyTorch.In most cases, the boilerplate code below will work well. It has the same basic components as the standard PyTorch training loop:1. Zero all parameter gradients2. Call the model and compute the loss3. Call backward on the loss to fill in gradients4. Take a step on the optimizerHowever, defining custom training loops allows for greater flexibility. For example, it is easy to save the parameters at each step of training, or use different learning rates for different parameters (which may be useful in deep kernel learning for example).
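For instance, a sketch of the second idea (the parameter groups and learning rates below are illustrative, not part of the original notebook):
```python
# Different learning rates for different parameter groups (illustrative values)
optimizer = torch.optim.Adam([
    {'params': model.covar_module.parameters(), 'lr': 0.1},
    {'params': model.mean_module.parameters(), 'lr': 0.01},
    {'params': model.likelihood.parameters(), 'lr': 0.01},
])
```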
###Code
training_iter = 50
# find optimal model hyperparameters
model.train()
likelihood.train()
optimizer = torch.optim.Adam([
{'params': model.parameters()}
], lr=0.1)
# loss for gps - the marginal log likelihood
mll = ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
###Output
Iter 1/50 - Loss: 1.570 lengthscale: 0.693 noise: 0.693
Iter 2/50 - Loss: 1.549 lengthscale: 0.644 noise: 0.744
Iter 3/50 - Loss: 1.531 lengthscale: 0.598 noise: 0.798
Iter 4/50 - Loss: 1.518 lengthscale: 0.554 noise: 0.852
Iter 5/50 - Loss: 1.508 lengthscale: 0.513 noise: 0.906
Iter 6/50 - Loss: 1.501 lengthscale: 0.474 noise: 0.959
Iter 7/50 - Loss: 1.495 lengthscale: 0.437 noise: 1.010
Iter 8/50 - Loss: 1.492 lengthscale: 0.403 noise: 1.058
Iter 9/50 - Loss: 1.489 lengthscale: 0.370 noise: 1.102
Iter 10/50 - Loss: 1.488 lengthscale: 0.341 noise: 1.140
Iter 11/50 - Loss: 1.487 lengthscale: 0.313 noise: 1.173
Iter 12/50 - Loss: 1.487 lengthscale: 0.288 noise: 1.200
Iter 13/50 - Loss: 1.486 lengthscale: 0.266 noise: 1.220
Iter 14/50 - Loss: 1.486 lengthscale: 0.246 noise: 1.234
Iter 15/50 - Loss: 1.486 lengthscale: 0.228 noise: 1.242
Iter 16/50 - Loss: 1.486 lengthscale: 0.213 noise: 1.245
Iter 17/50 - Loss: 1.485 lengthscale: 0.200 noise: 1.243
Iter 18/50 - Loss: 1.485 lengthscale: 0.189 noise: 1.237
Iter 19/50 - Loss: 1.484 lengthscale: 0.180 noise: 1.226
Iter 20/50 - Loss: 1.484 lengthscale: 0.172 noise: 1.213
Iter 21/50 - Loss: 1.483 lengthscale: 0.166 noise: 1.196
Iter 22/50 - Loss: 1.482 lengthscale: 0.161 noise: 1.178
Iter 23/50 - Loss: 1.482 lengthscale: 0.158 noise: 1.158
Iter 24/50 - Loss: 1.481 lengthscale: 0.155 noise: 1.137
Iter 25/50 - Loss: 1.481 lengthscale: 0.154 noise: 1.116
Iter 26/50 - Loss: 1.480 lengthscale: 0.153 noise: 1.095
Iter 27/50 - Loss: 1.480 lengthscale: 0.153 noise: 1.075
Iter 28/50 - Loss: 1.480 lengthscale: 0.153 noise: 1.056
Iter 29/50 - Loss: 1.479 lengthscale: 0.154 noise: 1.038
Iter 30/50 - Loss: 1.479 lengthscale: 0.156 noise: 1.022
Iter 31/50 - Loss: 1.479 lengthscale: 0.159 noise: 1.008
Iter 32/50 - Loss: 1.479 lengthscale: 0.162 noise: 0.996
Iter 33/50 - Loss: 1.479 lengthscale: 0.165 noise: 0.987
Iter 34/50 - Loss: 1.479 lengthscale: 0.169 noise: 0.980
Iter 35/50 - Loss: 1.479 lengthscale: 0.173 noise: 0.976
Iter 36/50 - Loss: 1.479 lengthscale: 0.178 noise: 0.974
Iter 37/50 - Loss: 1.479 lengthscale: 0.183 noise: 0.975
Iter 38/50 - Loss: 1.479 lengthscale: 0.188 noise: 0.978
Iter 39/50 - Loss: 1.479 lengthscale: 0.193 noise: 0.982
Iter 40/50 - Loss: 1.479 lengthscale: 0.197 noise: 0.988
Iter 41/50 - Loss: 1.478 lengthscale: 0.202 noise: 0.996
Iter 42/50 - Loss: 1.478 lengthscale: 0.206 noise: 1.004
Iter 43/50 - Loss: 1.478 lengthscale: 0.209 noise: 1.013
Iter 44/50 - Loss: 1.478 lengthscale: 0.212 noise: 1.022
Iter 45/50 - Loss: 1.478 lengthscale: 0.214 noise: 1.031
Iter 46/50 - Loss: 1.478 lengthscale: 0.215 noise: 1.039
Iter 47/50 - Loss: 1.478 lengthscale: 0.216 noise: 1.047
Iter 48/50 - Loss: 1.478 lengthscale: 0.216 noise: 1.054
Iter 49/50 - Loss: 1.478 lengthscale: 0.215 noise: 1.060
Iter 50/50 - Loss: 1.478 lengthscale: 0.213 noise: 1.064
###Markdown
Make predictions with the modelJust as a user defined GP model returns a MultivariateNormal containing the prior mean and covariance from forward, a trained GP model in eval mode returns a MultivariateNormal containing the posterior mean and covariance. Thus, getting the predictive mean and variance, and then sampling functions from the GP at the given test points could be accomplished with calls like:
###Code
# f_preds = model(test_x)
# y_preds = likelihood(model(test_x))
# f_mean = f_preds.mean
# f_var = f_preds.variance
# f_covar = f_preds.covariance_matrix
# f_sample = f_preds.sample(sample_shape=torch.Size(1000,))
# get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# test points are regularly spaced along [0,1]
# make predictions by feeding model through likelihood
with torch.no_grad(), fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
###Output
_____no_output_____
###Markdown
Plot the model
###Code
with torch.no_grad():
f, ax = plt.subplots(1, 1, figsize=(4, 3))
lower, upper = observed_pred.confidence_region()
ax.plot(train_x.numpy(), train_y.numpy(), 'k*')
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
###Output
_____no_output_____
tensorflow/chapter_computational-performance/hybridize.ipynb | ###Markdown
Compilers and Interpreters :label:`sec_hybridize` So far, this book has focused on *imperative programming*. Imperative programming uses statements such as `print`, "`+`", and `if` to change a program's state. Consider the following simple imperative program:
###Code
def add(a, b):
return a + b
def fancy_func(a, b, c, d):
e = add(a, b)
f = add(c, d)
g = add(e, f)
return g
print(fancy_func(1, 2, 3, 4))
###Output
10
###Markdown
Python is an *interpreted language*. Therefore, when it evaluates the `fancy_func` function above, it executes the operations in the function body in order. That is, it evaluates `e = add(a, b)` and stores the result as the variable `e`, thereby changing the program's state; the next two statements, `f = add(c, d)` and `g = add(e, f)`, are executed similarly, performing additions and storing the results in variables. :numref:`fig_compute_graph` illustrates the data flow.:label:`fig_compute_graph`Although imperative programming is convenient, it can be inefficient. For one thing, Python executes the three function calls separately, without exploiting the fact that `add` is called repeatedly inside `fancy_func`. If these instructions are executed on a GPU (or even several GPUs), the overhead produced by the Python interpreter can become very large. In addition, it must keep the values of the variables `e` and `f` until all statements in `fancy_func` have finished executing, because the program does not know whether other parts will use `e` and `f` after the statements `e = add(a, b)` and `f = add(c, d)` have been executed. Symbolic programming Consider the alternative, *symbolic programming*, in which code usually performs the computation only after the procedure has been fully defined. This strategy is used by several deep learning frameworks, including Theano and TensorFlow (the latter has since gained imperative extensions). It generally involves the following steps:1. Define the computation flow.1. Compile the flow into an executable program.1. Given the inputs, call the compiled program to execute.This allows a large amount of optimization. First, in many cases we can skip the Python interpreter, removing the performance bottleneck that arises when several fast GPUs are paired with a single Python thread on a single CPU. Second, the compiler can optimize and rewrite the code above into `print((1 + 2) + (3 + 4))` or even `print(10)`, because the compiler sees the complete code before converting it into machine instructions. For example, it can release memory (or never allocate it) as soon as a variable is no longer needed, or transform the code into a completely equivalent fragment. Below, we simulate imperative programming to better understand the concept of symbolic programming.
###Code
def add_():
return '''
def add(a, b):
return a + b
'''
def fancy_func_():
return '''
def fancy_func(a, b, c, d):
e = add(a, b)
f = add(c, d)
g = add(e, f)
return g
'''
def evoke_():
return add_() + fancy_func_() + 'print(fancy_func(1, 2, 3, 4))'
prog = evoke_()
print(prog)
y = compile(prog, '', 'exec')
exec(y)
###Output
def add(a, b):
return a + b
def fancy_func(a, b, c, d):
e = add(a, b)
f = add(c, d)
g = add(e, f)
return g
print(fancy_func(1, 2, 3, 4))
10
###Markdown
The differences between imperative (interpreted) programming and symbolic programming are as follows:* Imperative programming is easier to use. In Python, most imperative code is simple and easy to read. Imperative programming is also easier to debug, because it is simpler to obtain and print all intermediate variable values or to use Python's built-in debugging tools.* Symbolic programming is more efficient and easier to port. It is easier to optimize the code during compilation, and it also allows the program to be ported to a format independent of Python, which lets the program run in a non-Python environment and avoids any potential performance problems related to the Python interpreter. Hybrid programming Historically, most deep learning frameworks chose between imperative and symbolic programming. For example, Theano, TensorFlow (inspired by the former), Keras, and CNTK adopted symbolic programming, while Chainer and PyTorch adopted imperative programming. In later updates, TensorFlow 2.0 and Keras added imperative programming. Imperative programming is now the default choice in TensorFlow 2, a welcome change for those new to the framework. However, symbolic programming techniques and computational graphs still exist in TensorFlow and can be accessed through the easy-to-use `tf.function` decorator. This brought the imperative programming paradigm to TensorFlow, allowing users to define more intuitive functions and then wrap them and compile them automatically into computational graphs using a feature the TensorFlow team calls [autograph](https://www.tensorflow.org/api_docs/python/tf/autograph). Hybridizing `Sequential` The easiest way to understand how hybridization works is to consider a deep network with multiple layers. By convention, the Python interpreter needs to execute the code of all layers to produce an instruction that can then be forwarded to a CPU or GPU. For a single (fast) computing device this does not cause any major problems. On the other hand, if we use an advanced 8-GPU server such as an AWS P3dn.24xlarge instance, Python will struggle to keep all the GPUs busy. Here the bottleneck is the single-threaded Python interpreter. Let us see how to address this bottleneck in the code by hybridizing the `Sequential` model. We first define a simple multilayer perceptron.
###Code
import tensorflow as tf
from tensorflow.keras.layers import Dense
from d2l import tensorflow as d2l
# Factory function that produces the network
def get_net():
net = tf.keras.Sequential()
net.add(Dense(256, input_shape = (512,), activation = "relu"))
net.add(Dense(128, activation = "relu"))
net.add(Dense(2, activation = "linear"))
return net
x = tf.random.normal([1,512])
net = get_net()
net(x)
###Output
_____no_output_____
###Markdown
Originally, every function built in TensorFlow was constructed as a computational graph and was therefore JIT-compiled by default. However, with the release of TensorFlow 2.X and eager tensors, computational graphs are no longer the default behavior. We can re-enable this functionality with tf.function. tf.function is more commonly used as a function decorator, but, as shown below, it can also be called directly as an ordinary Python function. The computed results of the model remain unchanged.
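For reference, a minimal sketch of the decorator form (the wrapper name is illustrative):
```python
# Sketch: tf.function applied as a decorator to a small wrapper around the model
@tf.function
def run_net(x):
    return net(x)
```
In the cell below we instead apply `tf.function` directly to the model object.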
###Code
net = tf.function(net)
net(x)
###Output
_____no_output_____
###Markdown
We write the same code as before and simply convert the model with `tf.function`. Once this is done, the network is built as a computational graph in TensorFlow's MLIR intermediate representation and heavily optimized at the compiler level for fast execution (we benchmark the performance below). Explicitly adding the `jit_compile = True` flag to the `tf.function()` call enables the XLA (accelerated linear algebra) functionality in TensorFlow. In some cases XLA can further optimize the JIT-compiled code. Graph mode is enabled even without this explicit setting, but XLA can make certain large-scale linear algebra operations faster (similar to the operations we see in deep learning applications), especially in a GPU environment. Acceleration through hybridization To demonstrate the performance improvement gained through compilation, we compare the time needed to execute `net(x)` before and after hybridization. Let us first define a class for measuring time; it will be very useful throughout this chapter when measuring (and improving) model performance.
###Code
#@save
class Benchmark:
"""用于测量运行时间"""
def __init__(self, description='Done'):
self.description = description
def __enter__(self):
self.timer = d2l.Timer()
return self
def __exit__(self, *args):
print(f'{self.description}: {self.timer.stop():.4f} sec')
###Output
_____no_output_____
###Markdown
Now we can invoke the network three times: once in eager mode, once in graph mode, and once with JIT-compiled XLA.
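The cell below times only the first two; a sketch of the XLA variant (assuming a TensorFlow version in which `tf.function` accepts `jit_compile`, i.e. 2.5 or later) could look like this:
```python
# Sketch: benchmark an XLA (JIT) compiled version of the same network
net_xla = tf.function(get_net(), jit_compile=True)
with Benchmark('Graph mode + XLA'):
    for i in range(1000): net_xla(x)
```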
###Code
net = get_net()
with Benchmark('Eager mode'):
    for i in range(1000): net(x)
net = tf.function(net)
with Benchmark('Graph mode'):
for i in range(1000): net(x)
###Output
Eager mode: 1.3051 sec
###Markdown
As the results above show, after the `tf.keras.Sequential` instance has been scripted with the `tf.function` function, symbolic programming, implemented through TensorFlow's graph-mode execution, improves computational performance. Serialization One of the benefits of compiling models is that the model and its parameters can be serialized (saved) to disk. This allows a trained model to be deployed on other devices and makes it easy to use other front-end programming languages. In addition, code for a compiled model usually also runs faster than imperative code. The low-level API for saving models in TensorFlow is `tf.saved_model`; let us see `saved_model` in action.
###Code
net = get_net()
tf.saved_model.save(net, 'my_mlp')
!ls -lh my_mlp*
###Output
INFO:tensorflow:Assets written to: my_mlp/assets
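###Markdown
The saved model can later be restored with `tf.saved_model.load`. A minimal sketch (the restored object is not a Keras model; inference goes through its serving signatures, whose exact names depend on how the model was saved):
```python
# Sketch: restore the SavedModel written above and inspect its serving signatures
restored = tf.saved_model.load('my_mlp')
print(list(restored.signatures.keys()))
```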
AAAI/Learnability/CIN/older/ds2/synthetic_type2_MLP2_m_500.ipynb | ###Markdown
Generate dataset
###Code
# Imports used throughout this notebook
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
m = 500  # number of points per mosaic; value assumed from the notebook name (synthetic_type2_MLP2_m_500)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
np.reshape(a,(2*m,1))
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(m):
print(mosaic_list_of_images[0][2*j:2*j+2])
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):
"""
    mosaic_dataset : each data point is a mosaic of m 2-dimensional points, flattened into a vector of length 2*m
    labels : mosaic_dataset labels
    foreground_index : contains list of indexes where foreground point is present so that using this we can take weighted average
    dataset_number : tells what weight the foreground point receives; if it is "j" then fg_weight = j/m and each bg_weight = (m-j)/((m-1)*m)
"""
avg_image_dataset = []
cnt = 0
counter = np.zeros(m) #np.array([0,0,0,0,0,0,0,0,0])
for i in range(len(mosaic_dataset)):
img = torch.zeros([2], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(m):
if j == give_pref:
img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/m #2 is data dim
else :
img = img + mosaic_dataset[i][2*j:2*j+2]*(m-dataset_number)/((m-1)*m)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# avg_image_dataset_1 = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# test_dataset = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_dataset).numpy() / m
y1 = np.array(labels)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("test dataset4")
test_dataset[0:10]/m
test_dataset = test_dataset/m
test_dataset[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape
avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_11 = MosaicDataset(test_dataset, labels )
testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
# class Whatnet(nn.Module):
# def __init__(self):
# super(Whatnet,self).__init__()
# self.linear1 = nn.Linear(2,3)
# # self.linear2 = nn.Linear(50,10)
# # self.linear3 = nn.Linear(10,3)
# torch.nn.init.xavier_normal_(self.linear1.weight)
# torch.nn.init.zeros_(self.linear1.bias)
# def forward(self,x):
# # x = F.relu(self.linear1(x))
# # x = F.relu(self.linear2(x))
# x = (self.linear1(x))
# return x
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,50)
self.linear2 = nn.Linear(50,10)
self.linear3 = nn.Linear(10,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
torch.nn.init.xavier_normal_(self.linear3.weight)
torch.nn.init.zeros_(self.linear3.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = (self.linear3(x))
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
    return r_loss/(i+1)  # average over the number of batches (enumerate starts at 0)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the 1000 test dataset %d: %.2f %%' % (number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.001 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1500
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %.2f %%' % ( 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_11]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____
examples/Digits_classification_using__sinnn.ipynb | ###Markdown
In this example, we will train a neural network using sinnn on the MNIST digits dataset
###Code
!pip install sinnn
import tensorflow.keras.datasets.mnist as mnist
import numpy as np
from sinnn.Model import Model
from sinnn.Layers import Dense, ReLU
from sinnn.Losses import CrossEntropy
from sinnn.utils import save_model, load_model
###Output
_____no_output_____
###Markdown
Loading the dataset
###Code
(x_train, y_train), (x_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
The flatten function is used to flatten each 2-D image into a 1-D representation.
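An equivalent vectorized form (assuming the images are NumPy arrays, as returned by `mnist.load_data()`) is sketched below; the loop-based function that follows makes the same transformation explicit.
```python
# Vectorized alternative to the loop-based flatten (same result for MNIST-shaped arrays)
x_train_flat = x_train.reshape(len(x_train), -1)
x_test_flat = x_test.reshape(len(x_test), -1)
```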
###Code
def flatten(array):
flat = []
for i in range(len(array)):
flat.append(array[i].flatten())
return np.array(flat)
x_train_flat, x_test_flat = flatten(x_train), flatten(x_test)
###Output
_____no_output_____
###Markdown
Display some images at random from the training set along with their labels.
###Code
import matplotlib.pyplot as plt  # needed for the plots in this cell
for i in np.random.randint(len(x_train), size=12):
plt.title("Label: %i" % y_train[i])
plt.imshow(x_train[i], cmap='Blues')
plt.show()
###Output
_____no_output_____
###Markdown
A model object is instantiated with the CrossEntropy loss function. We add a network with 100 neurons in the first hidden layer, followed by ReLU, then 200 neurons in the second hidden layer, again followed by ReLU. The output layer contains 10 neurons, one for each label; the largest output activation gives the predicted label.
###Code
model = Model(loss_function=CrossEntropy())
model.add(Dense(100), ReLU(), Dense(200), ReLU(), Dense(10))
model.train(x_train_flat, y_train, 32, 10, (x_test_flat, y_test), metrics=("loss", "accuracy"))
###Output
{'Epochs': 0, 'train_loss': 2.327499879611317, 'validation_loss': 2.32834553756797, 'train_accuracy': 0.10023333333333333, 'validation_accuracy': 0.1034}
{'Epochs': 1, 'train_loss': 0.13742208989431587, 'validation_loss': 0.15405078140559098, 'train_accuracy': 0.9575, 'validation_accuracy': 0.9529}
{'Epochs': 2, 'train_loss': 0.0902191965792777, 'validation_loss': 0.11845338411072721, 'train_accuracy': 0.9724666666666667, 'validation_accuracy': 0.9643}
{'Epochs': 3, 'train_loss': 0.07056076203364373, 'validation_loss': 0.10927901269074025, 'train_accuracy': 0.97845, 'validation_accuracy': 0.9666}
{'Epochs': 4, 'train_loss': 0.057076953228899524, 'validation_loss': 0.1043093214094062, 'train_accuracy': 0.9825166666666667, 'validation_accuracy': 0.9679}
{'Epochs': 5, 'train_loss': 0.04607063623189347, 'validation_loss': 0.10051429649378738, 'train_accuracy': 0.9858333333333333, 'validation_accuracy': 0.97}
{'Epochs': 6, 'train_loss': 0.04040050494743541, 'validation_loss': 0.10240804069806213, 'train_accuracy': 0.9871, 'validation_accuracy': 0.9712}
{'Epochs': 7, 'train_loss': 0.03511342955992402, 'validation_loss': 0.10401390517454151, 'train_accuracy': 0.9887333333333334, 'validation_accuracy': 0.9713}
{'Epochs': 8, 'train_loss': 0.030858388214574702, 'validation_loss': 0.10598119563522737, 'train_accuracy': 0.99, 'validation_accuracy': 0.9706}
{'Epochs': 9, 'train_loss': 0.027925971674904352, 'validation_loss': 0.10972325552191253, 'train_accuracy': 0.9910166666666667, 'validation_accuracy': 0.9719}
{'Epochs': 10, 'train_loss': 0.02473529972458869, 'validation_loss': 0.11216793326920708, 'train_accuracy': 0.9921666666666666, 'validation_accuracy': 0.9713}
###Markdown
After training, we immediately save the model so that the trained parameters are not lost.
###Code
save_model(model)
###Output
_____no_output_____
###Markdown
The model is loaded back and all logged metrics are plotted.
###Code
import matplotlib.pyplot as plt
model = load_model()
for metric in model.train_log:
for key in model.train_log[metric]:
plt.plot(model.train_log[metric][key], label=key)
plt.xlabel('Epochs')
plt.ylabel(metric)
plt.legend()
plt.show()
###Output
_____no_output_____
student-notebooks/07.00-Protein-Docking.ipynb | ###Markdown
Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below:
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
DeepBeliefNetwork.ipynb | ###Markdown
CSE2042 Machine Learning Deep Belief Network Uploading File:-
###Code
from google.colab import files
uploaded=files.upload()
###Output
_____no_output_____
###Markdown
Importing Packages:-
###Code
import tensorflow as tf
from sklearn.metrics import log_loss, accuracy_score
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Reading dataset:-
###Code
df= pd.read_csv('columba.csv')
###Output
_____no_output_____
###Markdown
**Defining Normalization Function :-**Min-Max Re-scaling: Shifting and squeezing a distribution to fit on a scale between 0 and 1. Min-Max Re-scaling is useful for comparing distributions with different scales or different shapes.
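In symbols, each value is rescaled as $$x' = \frac{x - \min(x)}{\max(x) - \min(x)},$$ which is exactly what the function below computes.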
###Code
def normalize(x):
x = x.astype(float)
min = np.min(x)
max = np.max(x)
return (x - min)/(max-min)
###Output
_____no_output_____
###Markdown
**Defining Viewing Function :-**
###Code
def view_values(X, y, example):
label = y.loc[example]
image = X.loc[example,:].values.reshape([-1,1])
print(image)
###Output
_____no_output_____
###Markdown
**Dimensions of Dataset Used:-**
###Code
print("Shape of dataframe: ", df.shape) #train
df.describe()
###Output
_____no_output_____
###Markdown
Viewing Contents:-
###Code
df.head()
#More visual way to see the values in the layer
def view_gradient(X, y, example):
label = y.loc[example]
image = X.loc[example,:].values.reshape([14,1])
plt.title('Example: %d Label: %d' % (example, label))
plt.imshow(image, cmap=plt.get_cmap('gray'))
plt.show()
###Output
_____no_output_____
###Markdown
RBM A Restricted Boltzmann Machine (RBM) is an unsupervised learning algorithm. It has a visible layer of neurons that receives the input data; the input is multiplied by a set of weights and a bias is added at each hidden-layer neuron to produce the hidden output. That hidden output then becomes a new input: it is multiplied by the same weights and the visible-layer bias is added to regenerate the input. This process is called reconstruction, or the backward pass. The regenerated input is then compared with the original input, and the process repeats until the reconstruction is well aligned with the original input.
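The weight update implemented in the `train` method below is the standard contrastive divergence (CD-1) rule; up to averaging over the batch it can be written as $$\Delta W = \eta\,\big(v_0^\top h_0 - v_1^\top h_1\big),\qquad \Delta b_v = \eta\,(v_0 - v_1),\qquad \Delta b_h = \eta\,(h_0 - h_1),$$ where $v_0$ is the input, $h_0$ a sample of the hidden units given $v_0$, and $v_1, h_1$ come from the reconstruction pass.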
###Code
# Define RBM class
class RBM(object):
def __init__(self, input_size, output_size,
learning_rate, epochs, batchsize):
# Define hyperparameters:used to control learning process
self._input_size = input_size
self._output_size = output_size
self.learning_rate = learning_rate # variables which determine how the network is trained
        self.epochs = epochs #The number of epochs is a hyperparameter that defines the number of times that the learning algorithm will work
#through the entire training dataset
self.batchsize = batchsize #batch size is a hyperparameter that defines the number of samples to work through
#before updating the internal model parameters.
# Initialize weights and biases using zero matrices
##np.zeros:-returns a new array of given shape and type
self.w = np.zeros([input_size, output_size], dtype=np.float32)
self.hb = np.zeros([output_size], dtype=np.float32)
self.vb = np.zeros([input_size], dtype=np.float32)
# forward pass, where h is the hidden layer and v is the visible layer
##Calculate the activation value of the Sigmoid() function from the visible layer to the hidden layer
def prob_h_given_v(self, visible, w, hb):
return tf.nn.sigmoid(tf.matmul(visible, w) + hb) #matmul() function returns the matrix product of two arrays
# backward pass
##Calculate the activation value of the Sigmoid() function from the hidden layer to the visible layer
def prob_v_given_h(self, hidden, w, vb):
return tf.nn.sigmoid(tf.matmul(hidden, tf.transpose(w)) + vb)
# sampling function
##Sampling according to the given probability
def sample_prob(self, probs):
return tf.nn.relu(tf.sign(probs - tf.random_uniform(tf.shape(probs))))
def train(self, X):
#A placeholder is simply a variable that we will assign data to at a later date
_w = tf.placeholder(tf.float32, [self._input_size, self._output_size])
_hb = tf.placeholder(tf.float32, [self._output_size])
_vb = tf.placeholder(tf.float32, [self._input_size])
prv_w = np.zeros([self._input_size, self._output_size], dtype=np.float32)
prv_hb = np.zeros([self._output_size], dtype=np.float32)
prv_vb = np.zeros([self._input_size], dtype=np.float32)
cur_w = np.zeros([self._input_size, self._output_size], dtype=np.float32)
cur_hb = np.zeros([self._output_size], dtype=np.float32)
cur_vb = np.zeros([self._input_size], dtype=np.float32)
        ## Gibbs sampling step (CD-1): v0 -> h0 -> v1 -> h1
v0 = tf.placeholder(tf.float32, [None, self._input_size])
h0 = self.sample_prob(self.prob_h_given_v(v0, _w, _hb))
v1 = self.sample_prob(self.prob_v_given_h(h0, _w, _vb))
h1 = self.prob_h_given_v(v1, _w, _hb)
        #To update the weights, we perform contrastive divergence.
positive_grad = tf.matmul(tf.transpose(v0), h0)
negative_grad = tf.matmul(tf.transpose(v1), h1)
# Calculate and update each parameter
update_w = _w + self.learning_rate * (positive_grad - negative_grad) / tf.to_float(tf.shape(v0)[0])
update_vb = _vb + self.learning_rate * tf.reduce_mean(v0 - v1, 0)
update_hb = _hb + self.learning_rate * tf.reduce_mean(h0 - h1, 0)
# We also define the error as the MSE
err = tf.reduce_mean(tf.square(v0 - v1))
error_list = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(self.epochs):
for start, end in zip(range(0, len(X), self.batchsize),range(self.batchsize,len(X), self.batchsize)):
batch = X[start:end]
cur_w = sess.run(update_w, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_hb = sess.run(update_hb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_vb = sess.run(update_vb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
prv_w = cur_w
prv_hb = cur_hb
prv_vb = cur_vb
error = sess.run(err, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb})
print ('Epoch: %d' % epoch,'reconstruction error: %f' % error)
error_list.append(error)
self.w = prv_w
self.hb = prv_hb
self.vb = prv_vb
return error_list
#function to generate new features from the generative model that the RBM has learned
def rbm_output(self, X):
input_X = tf.constant(X)
_w = tf.constant(self.w)
_hb = tf.constant(self.hb)
_vb = tf.constant(self.vb)
out = tf.nn.sigmoid(tf.matmul(input_X, _w) + _hb)
hiddenGen = self.sample_prob(self.prob_h_given_v(input_X, _w, _hb))
visibleGen = self.sample_prob(self.prob_v_given_h(hiddenGen, _w, _vb))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
return sess.run(out), sess.run(visibleGen), sess.run(hiddenGen)
###Output
_____no_output_____
###Markdown
Training
###Code
#Dropping unnecessary string columns
df=df.drop(['commitdate','transactionid'], axis=1)
###Output
_____no_output_____
###Markdown
Splitting and Normalizing dataframe
###Code
#split df
train_X = df.iloc[:,:-1].apply(func=normalize, axis=0)
train_Y = df.iloc[:,-1]
# df=df.drop(['transactionid'], axis=1)
print(df.head())
df.shape
inputX = df.iloc[:,:-1].apply(func=normalize, axis=0).values
inputY= df.iloc[:,-1].values
print(type(inputX))
inputX = inputX.astype(np.float32)
#List to hold RBMs
rbm_list = []
#define parameters of RBMs we will train
# 14-20-12-12-2
# def __init__(self, input_size, output_size,learning_rate, epochs, batchsize):
rbm_list.append(RBM(14, 20, 0.002, 200, 100))
rbm_list.append(RBM(20, 12, 0.002, 200, 100))
rbm_list.append(RBM(12, 12, 0.002, 200, 100))
# The RBM class above uses TF1-style placeholders and sessions, so switch to the compat.v1 API
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
outputList = []
error_list = []
#For each RBM in out list
for i in range(0, len(rbm_list)):
print('RBM', i+1)
#Train new RBM
rbm = rbm_list[i]
err = rbm.train(inputX)
error_list.append(err)
#Return output layer
#sess.run(out), sess.run(visibleGen), sess.run(hiddenGen)
outputX, reconstructedX, hiddenX = rbm.rbm_output(inputX)
outputList.append(outputX)
inputX= hiddenX
i = 1
for err in error_list:
print("RBM",i)
pd.Series(err).plot(logy=False)
plt.xlabel("Epoch")
plt.ylabel("Reconstruction Error")
plt.show()
i += 1
inputX = np.array(train_X)
inputX = inputX.astype(np.float32)
rbmOne = rbm_list[0]
print('RBM 1')
outputX_rbmOne, reconstructedX_rbmOne, hiddenX_rbmOne = rbmOne.rbm_output(inputX)
reconstructedX_rbmOne = pd.DataFrame(data=reconstructedX_rbmOne, index=train_X.index)
for j in range(0,1):
example = j
print("Data generated by First RBM Layer")
view_values(reconstructedX_rbmOne, train_Y, example)
print("Original Data")
view_values(train_X, train_Y, example)
reconstructedX_rbmOne.shape
###Output
_____no_output_____
###Markdown
DBN. A Deep Belief Network (DBN) is a multi-layer generative graphical model. DBNs have bi-directional (RBM-type) connections in the top layer, while the lower layers only have top-down connections. They are trained using layerwise pre-training: the network is trained component by component from the bottom up, first treating the first two layers as an RBM and training it, then treating the second and third layers as another RBM and training those parameters.
###Code
class DBN(object):
def __init__(self, original_input_size, input_size, output_size,
learning_rate, epochs, batchsize, rbmOne, rbmTwo, rbmThree):
# Define hyperparameters
self._original_input_size = original_input_size
self._input_size = input_size
self._output_size = output_size
self.learning_rate = learning_rate
self.epochs = epochs
self.batchsize = batchsize
self.rbmOne = rbmOne
self.rbmTwo = rbmTwo
self.rbmThree = rbmThree
self.w = np.zeros([input_size, output_size], "float")
self.hb = np.zeros([output_size], "float")
self.vb = np.zeros([input_size], "float")
def prob_h_given_v(self, visible, w, hb):
return tf.nn.sigmoid(tf.matmul(visible, w) + hb)
def prob_v_given_h(self, hidden, w, vb):
return tf.nn.sigmoid(tf.matmul(hidden, tf.transpose(w)) + vb)
def sample_prob(self, probs):
return tf.nn.relu(tf.sign(probs - tf.random_uniform(tf.shape(probs))))
def train(self, X):
_w = tf.placeholder("float", [self._input_size, self._output_size])
_hb = tf.placeholder("float", [self._output_size])
_vb = tf.placeholder("float", [self._input_size])
prv_w = np.zeros([self._input_size, self._output_size], "float")
prv_hb = np.zeros([self._output_size], "float")
prv_vb = np.zeros([self._input_size], "float")
cur_w = np.zeros([self._input_size, self._output_size], "float")
cur_hb = np.zeros([self._output_size], "float")
cur_vb = np.zeros([self._input_size], "float")
v0 = tf.placeholder("float", [None, self._original_input_size])
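#The next three lines push the raw input v0 through the three pre-trained RBMs,
#sampling binary hidden states at each layer, so that this new top-level RBM is
#trained on the deepest hidden representation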
forwardOne = tf.nn.relu(tf.sign(tf.nn.sigmoid(tf.matmul(v0, self.rbmOne.w) + self.rbmOne.hb) - tf.random_uniform(tf.shape(tf.nn.sigmoid(tf.matmul(v0, self.rbmOne.w) + self.rbmOne.hb)))))
forwardTwo = tf.nn.relu(tf.sign(tf.nn.sigmoid(tf.matmul(forwardOne, self.rbmTwo.w) + self.rbmTwo.hb) - tf.random_uniform(tf.shape(tf.nn.sigmoid(tf.matmul(forwardOne, self.rbmTwo.w) + self.rbmTwo.hb)))))
forward = tf.nn.relu(tf.sign(tf.nn.sigmoid(tf.matmul(forwardTwo, self.rbmThree.w) + self.rbmThree.hb) - tf.random_uniform(tf.shape(tf.nn.sigmoid(tf.matmul( forwardTwo, self.rbmThree.w) + self.rbmThree.hb)))))
h0 = self.sample_prob(self.prob_h_given_v(forward, _w, _hb))
v1 = self.sample_prob(self.prob_v_given_h(h0, _w, _vb))
h1 = self.prob_h_given_v(v1, _w, _hb)
positive_grad = tf.matmul(tf.transpose(forward), h0)
negative_grad = tf.matmul(tf.transpose(v1), h1)
update_w = _w + self.learning_rate * (positive_grad - negative_grad) / tf.to_float(tf.shape(forward)[0])
update_vb = _vb + self.learning_rate * tf.reduce_mean(forward - v1, 0)
update_hb = _hb + self.learning_rate * tf.reduce_mean(h0 - h1, 0)
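#The reconstruction v1 is mapped back down through the frozen RBMs (transposed
#weights and visible biases) so the reconstruction error can be measured in the
#original input space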
backwardOne = tf.nn.relu(tf.sign(tf.nn.sigmoid(tf.matmul(v1, self.rbmThree.w.T) + self.rbmThree.vb) - tf.random_uniform(tf.shape(tf.nn.sigmoid(tf.matmul(v1, self.rbmThree.w.T) + self.rbmThree.vb)))))
backwardTwo = tf.nn.relu(tf.sign(tf.nn.sigmoid(tf.matmul(backwardOne, self.rbmTwo.w.T) + self.rbmTwo.vb) - tf.random_uniform(tf.shape(tf.nn.sigmoid(tf.matmul(backwardOne, self.rbmTwo.w.T) + self.rbmTwo.vb)))))
backward = tf.nn.relu(tf.sign(tf.nn.sigmoid(tf.matmul(backwardTwo, self.rbmOne.w.T) + self.rbmOne.vb) - tf.random_uniform(tf.shape(tf.nn.sigmoid(tf.matmul(backwardTwo, self.rbmOne.w.T) + self.rbmOne.vb)))))
err = tf.reduce_mean(tf.square(v0 - backward))
error_list = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(self.epochs):
for start, end in zip(range(0, len(X), self.batchsize), range(self.batchsize,len(X), self.batchsize)):
batch = X[start:end]
cur_w = sess.run(update_w, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_hb = sess.run(update_hb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
cur_vb = sess.run(update_vb, feed_dict={v0: batch, _w: prv_w, _hb: prv_hb, _vb: prv_vb})
prv_w = cur_w
prv_hb = cur_hb
prv_vb = cur_vb
error = sess.run(err, feed_dict={v0: X, _w: cur_w, _vb: cur_vb, _hb: cur_hb})
print ('Epoch: %d' % (epoch+1),'reconstruction error: %f' % error)
error_list.append(error)
self.w = prv_w
self.hb = prv_hb
self.vb = prv_vb
return error_list
def dbn_output(self, X):
input_X = tf.constant(X)
forwardOne = tf.nn.sigmoid(tf.matmul(input_X, self.rbmOne.w) + self.rbmOne.hb)
forwardTwo = tf.nn.sigmoid(tf.matmul(forwardOne, self.rbmTwo.w) + self.rbmTwo.hb)
forward = tf.nn.sigmoid(tf.matmul(forwardTwo, self.rbmThree.w) + self.rbmThree.hb)
_w = tf.constant(self.w)
_hb = tf.constant(self.hb)
_vb = tf.constant(self.vb)
out = tf.nn.sigmoid(tf.matmul(forward, _w) + _hb)
hiddenGen = self.sample_prob(self.prob_h_given_v(forward, _w, _hb))
visibleGen = self.sample_prob(self.prob_v_given_h(hiddenGen, _w, _vb))
backwardTwo = tf.nn.sigmoid(tf.matmul(visibleGen, self.rbmThree.w.T) + self.rbmThree.vb)
backwardOne = tf.nn.sigmoid(tf.matmul(backwardTwo, self.rbmTwo.w.T) + self.rbmTwo.vb)
backward = tf.nn.sigmoid(tf.matmul(backwardOne, self.rbmOne.w.T) + self.rbmOne.vb)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
return sess.run(out), sess.run(backward)
###Output
_____no_output_____
###Markdown
def __init__(self, original_input_size, input_size, output_size, learning_rate, epochs, batchsize, rbmOne, rbmTwo, rbmThree):
###Code
dbn = DBN(14, 12, 12, 0.02, 50, 100, rbm_list[0], rbm_list[1], rbm_list[2])
inputX = np.array(inputX)
error_list = []
error_list = dbn.train(inputX)
print("DBN")
pd.Series(error_list).plot(logy=False)
plt.xlabel("Epoch")
plt.ylabel("Reconstruction Error")
plt.show()
train_X.shape
train_Y.head()
###Output
_____no_output_____
###Markdown
TESTING
###Code
train_X.shape
# train_Y.shape
print('DBN 1')
outputX_dbn, reconstructedX_dbn = dbn.dbn_output(inputX)
###Output
DBN 1
###Markdown
**Classifier** (Logistic): uses `outputX_dbn` as the input for the classifier
###Code
Y_Train = train_Y.iloc[:4000].values
Y_Train.shape
###Output
_____no_output_____
###Markdown
Logistic Regression
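For reference, the update implemented below is $\theta \leftarrow \theta + \alpha \cdot \frac{1}{m} X^\top \big(y - \sigma(X\theta)\big)$: gradient ascent on the log-likelihood, which is equivalent to gradient descent on the cross-entropy error computed by the `error` function.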
###Code
def sigmoid(x):
return 1.0/(1.0 + np.exp(-x))
def hypothesis(X, theta) :
#returns the dot product of vectors X and theta
return sigmoid(np.dot(X, theta))
def error(X,y,theta):
hi = hypothesis(X,theta)
error= -1* np.mean ( y * np.log(hi) + ( ( 1 - y ) * (np.log( 1 - hi )) ) )
return error
def gradient(X,y,theta):
hi = hypothesis(X,theta)
#X.T :transpose of X
grad = np.dot(X.T,(y-hi))
m=X.shape[0]
return grad/m
def gradient_descent(X,y,lr=0.02,max_itr=500):
n=X.shape[1]
theta = np.zeros((n,1))
error_list= []
for i in range(max_itr):
err = error(X,y,theta)
error_list.append(err)
grad = gradient(X,y,theta)
#update theta
theta = theta + lr * grad
return (theta, error_list)
ones = np.ones((outputX_dbn.shape[0],1))
X_New_Train = np.hstack((ones,outputX_dbn))
X_New_Train = X_New_Train[:4000,:]
Y_Train= Y_Train.reshape((-1,1))
Y_Train.shape
theta, error_list = gradient_descent(X_New_Train, Y_Train)
plt.plot(error_list)
theta.shape
print(theta)
def predict(X,theta):
h = hypothesis(X, theta)
output = np.zeros(h.shape)
output[h>=0.5] = 1
output = output.astype('int')
return output
XT_preds = predict(X_New_Train,theta)
###Output
_____no_output_____
###Markdown
Predict
###Code
print(XT_preds)
def accuracy(actual, preds):
actual = actual.astype('int')
actual = actual.reshape((-1,1))
acc= np.sum(actual==preds)/actual.shape[0]
return acc*100
###Output
_____no_output_____
###Markdown
Accuracy
###Code
train_acc= accuracy(Y_Train,XT_preds)
print(train_acc)
###Output
70.0
###Markdown
Evaluation Metrics
###Code
from sklearn.metrics import f1_score
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
print(f1_score(Y_Train,XT_preds,average='micro'))
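# recall and precision (imported above) could be inspected in the same way, e.g.:
# print(recall_score(Y_Train, XT_preds, average='micro'))
# print(precision_score(Y_Train, XT_preds, average='micro'))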
###Output
0.7
|
ja/08_network_reliability.ipynb | ###Markdown
Computing Network Reliability. In this chapter we introduce how to use Graphillion to evaluate how **failure-resistant** a communication network is. Communication networks and reliability: a communication network connects terminals with links so that the terminals can communicate, and it can be modeled as a graph whose vertices are the terminals and whose edges are the links. Communication networks are critical infrastructure that supports people's lives, so they are expected to keep operating without failure. **Network reliability** is a measure of how robust a given network is to failures, and knowing it helps us build networks that are more resistant to failure. Network reliability is defined as the probability that specific terminals can still communicate when the links that make up the network fail probabilistically. Let us look at a concrete example. Consider the reliability of the simple network on the left. Suppose each link of the network fails on a given day with probability 5%, that is, the link is unusable on 5 out of 100 days. The probability that communication is possible over this network equals the probability that the link is available, namely 95%, and this 95% is the network reliability. Next, let us compute the network reliability of the network in Figure 2, again assuming each link fails with probability 5%. Unlike the previous example, even if a single link fails there is an alternative path connecting the terminals, so communication is still possible. The reliability of this network can be computed by enumerating every combination of link failures under which communication remains possible; there are the following five such patterns. Computing the probability of each of these failure patterns and summing them gives the probability that communication is possible, i.e., the network reliability. Carrying out the calculation, the reliability of this network is 99.5%. In this example we computed the probability that two vertices can communicate, but the probability that K vertices, or all vertices, can communicate is also called network reliability. Computing network reliability with Graphillion: computing network reliability requires enumerating an exponential number of failure patterns, so, like the graph problems introduced so far, it is known to take time exponential in the size of the network. Moreover, unlike an optimization problem where it suffices to find one good combination, existing combinatorial-optimization software cannot be used for it. With Graphillion we can compute network reliability exactly. The `GraphSet` module implements a method `gs.probability` that computes the probability that, when each edge of the graph is selected probabilistically, the resulting graph belongs to the graph set `gs`. Network reliability is the probability of obtaining a graph in which the specified vertices are all connected, so we can compute it by building such a graph set and calling `gs.probability`. Let us walk through an example of computing network reliability with Graphillion. Here we use the grid graph below and compute the probability that all vertices can communicate with one another, assuming each edge fails with probability 5%.
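As a worked illustration (the figures are not reproduced here, so the exact topology of Figure 2 is an assumption on our part): if the two terminals are joined both by a direct link and by a two-link detour, the five working patterns sum to $R = 0.95 + 0.05 \times 0.95^2 \approx 0.995$, matching the 99.5% quoted above.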
###Code
!pip install graphillion
!git clone https://github.com/nsnmsak/graphillion_tutorial
!cp graphillion_tutorial/ja/tutorial_util.py .
from graphillion import GraphSet, tutorial
from tutorial_util import draw_subgraph, draw_universe
univ = tutorial.grid(4, 4)
GraphSet.set_universe(univ)
draw_universe()
###Output
_____no_output_____
###Markdown
Next we create a `GraphSet` object. A subgraph in which all vertices can communicate corresponds to a subgraph that contains a spanning tree. A `GraphSet` object representing the set of such graphs can be obtained as follows.
###Code
spanning_trees = GraphSet.trees(1, is_spanning=True)
all_subgraphs = GraphSet({})
gs = all_subgraphs.supergraphs(spanning_trees)
###Output
_____no_output_____
###Markdown
Drawing a few graphs at random from the set, we can see that all vertices are indeed connected.
###Code
iterator = gs.rand_iter()
draw_subgraph(next(iterator))
draw_subgraph(next(iterator))
###Output
_____no_output_____
###Markdown
All that remains is to set the edge probabilities (here 0.95, i.e., a 5% failure rate for each edge) and run the `gs.probability` method to compute the reliability.
###Code
probs = {edge: 0.95 for edge in univ}
gs.probability(probs)
###Output
_____no_output_____
###Markdown
Computing Network Reliability. In this chapter we introduce how to use Graphillion to evaluate how **failure-resistant** a communication network is. Communication networks and reliability: a communication network connects terminals with links so that the terminals can communicate, and it can be modeled as a graph whose vertices are the terminals and whose edges are the links. Communication networks are critical infrastructure that supports people's lives, so they are expected to keep operating without failure. **Network reliability** is a measure of how robust a given network is to failures, and knowing it helps us build networks that are more resistant to failure. Network reliability is defined as the probability that specific terminals can still communicate when the links that make up the network fail probabilistically. Let us look at a concrete example. Consider the reliability of the simple network on the left. Suppose each link of the network fails on a given day with probability 1%, that is, the link is unusable on 1 out of 100 days. The probability that communication is possible over this network equals the probability that the link is available, namely 99%, and this 99% is the network reliability. Next, let us compute the network reliability of the network in Figure 2, assuming each link fails with probability 1%. Unlike the previous example, even if a single link fails there is an alternative path connecting the terminals, so communication is still possible. The reliability of this network can be computed by enumerating every combination of link failures under which communication remains possible; there are the following five such patterns. Computing the probability of each of these failure patterns and summing them gives the probability that communication is possible, i.e., the network reliability. Carrying out the calculation, the reliability of this network is 98.1%. In this example we computed the probability that two vertices can communicate, but the probability that K vertices, or all vertices, can communicate is also called network reliability. Computing network reliability with Graphillion: computing network reliability requires enumerating an exponential number of failure patterns, so, like the graph problems introduced so far, it is known to take time exponential in the size of the network. Moreover, unlike an optimization problem where it suffices to find one good combination, existing combinatorial-optimization software cannot be used for it. With Graphillion we can compute network reliability exactly. The `GraphSet` module implements a method `gs.probability`, introduced earlier, that computes the probability that, when each edge of the graph is selected probabilistically, the resulting graph belongs to the graph set `gs`. Network reliability can be rephrased as the probability of obtaining a graph in which the specified vertices are all connected, so we can compute it by building such a graph set and calling `gs.probability`. Let us walk through an example of computing network reliability with Graphillion. Here we use the grid graph below and compute the probability that all vertices can communicate with one another, assuming each edge fails with probability 5%.
###Code
!pip install graphillion
!git clone https://github.com/nsnmsak/graphillion_tutorial
!cp graphillion_tutorial/ja/tutorial_util.py .
from graphillion import GraphSet, tutorial
from tutorial_util import draw_subgraph, draw_universe
univ = tutorial.grid(4, 4)
GraphSet.set_universe(univ)
draw_universe()
###Output
_____no_output_____
###Markdown
Next we create a `GraphSet` object. A subgraph in which all vertices can communicate corresponds to a subgraph that contains a spanning tree. A `GraphSet` object representing the set of such graphs can be obtained as follows.
###Code
spanning_trees = GraphSet.trees(1, is_spanning=True)
all_subgraphs = GraphSet({})
gs = all_subgraphs.supergraphs(spanning_trees)
###Output
_____no_output_____
###Markdown
Drawing a few graphs at random from the set, we can see that all vertices are indeed connected.
###Code
iterator = gs.rand_iter()
draw_subgraph(next(iterator))
draw_subgraph(next(iterator))
###Output
_____no_output_____
###Markdown
All that remains is to set the edge probabilities (here 0.95, i.e., a 5% failure rate for each edge) and run the `gs.probability` method to compute the reliability.
###Code
probs = {edge: 0.95 for edge in univ}
gs.probability(probs)
###Output
_____no_output_____ |
Sequence.ipynb | ###Markdown
Cartesian product using a list comprehension
###Code
colors = ['black', 'white']
sizes = ['S', 'M', 'L']
tshirts = [(color, size) for color in colors for size in sizes]
print(tshirts)
###Output
[('black', 'S'), ('black', 'M'), ('black', 'L'), ('white', 'S'), ('white', 'M'), ('white', 'L')]
###Markdown
Cartesian product in a generator expression
###Code
colors = ['black', 'white']
sizes = ['S', 'M', 'L']
for tshirt in ('%s %s' % (c, s) for c in colors for s in sizes):
print(tshirt)
###Output
black S
black M
black L
white S
white M
white L
###Markdown
Initializing a tuple and an array from a generator expression
###Code
symbols = '$¢£¥€¤'
print(tuple(ord(symbol) for symbol in symbols))
import array
print(array.array('I', (ord(symbol) for symbol in symbols)))
###Output
(36, 162, 163, 165, 8364, 164)
array('I', [36, 162, 163, 165, 8364, 164])
###Markdown
Get file name
###Code
# the os.path.split() function builds a tuple (path, last_part) from a filesystem path
import os
_, filename = os.path.split('/home/luciano/.ssh/idrsa.pub')
filename
###Output
_____no_output_____
###Markdown
This script demonstrates the use of multiple classifiers for (restricted) MNIST number sequences in TensorFlow on an extremely simple CNN. We import: keras for getting the MNIST data, to_categorical to encode the labels in one-hot format, tensorflow, numpy for creating the number sequences, os, and matplotlib
###Code
from keras.datasets import mnist
from keras.utils import to_categorical
import tensorflow as tf
import numpy as np
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We want to read in sequences of numbers. In order to achieve this we horizontally concatenate numbers into sequences of up to length 5
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
def concat_images(data, label, number_of_images):
images = []
labels = []
for i in range(number_of_images):
#First get a random number between 1 and 5 to indicate how long the seq will be
seq_len = np.random.randint(1, 6)
# Then get seq_len images and concat them to one sequence
rnd_ind = np.random.randint(0, len(data), seq_len)
#Create a new concatted image
image = data[rnd_ind[0]]
temp_label = label[rnd_ind[0]]
#Stack the images and labels
for i in rnd_ind[1:]:
image = np.hstack((image, data[i]))
temp_label = np.vstack((temp_label, label[i]))
# pad the image with zeros so all have the same size
image = np.pad(image, [(0,0),(0,140-image.shape[1])], 'constant')
#Pad labels with 10 as a NAN indicator
label_padded = np.pad(label[rnd_ind], [(0), (5-len(rnd_ind))], 'constant', constant_values = 10)
labels.append(label_padded)
images.append(image)
return np.array(images), np.array(labels)
train_data, train_labels = concat_images(X_train, y_train, 100000)
test_data, test_labels = concat_images(X_test, y_test, 32000)
###Output
_____no_output_____
###Markdown
Preprocess the data (reshape to the correct format for grayscale + standardization). Create one-hot encodings
###Code
train_data_preprocessed = train_data.reshape(train_data.shape[0], train_data.shape[1], train_data.shape[2],1)
test_data_preprocessed = test_data.reshape(test_data.shape[0], test_data.shape[1], test_data.shape[2],1)
#Standardize data
train_data_preprocessed = train_data_preprocessed / 255
test_data_preprocessed = test_data_preprocessed / 255
#OHE labels
train_labels_OHE = to_categorical(train_labels,11)
test_labels_OHE = to_categorical(test_labels,11)
#Look at some sample data with labels
plt.figure(0)
for i in range(3):
plt.imshow(train_data_preprocessed[i].reshape(28,140), cmap="gray")
plt.title(train_labels[i])
plt.figure(i+1)
###Output
_____no_output_____
###Markdown
Define accuracy function for evaluation and next_batch function for training
###Code
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 2) == labels)
/ predictions.shape[1] / predictions.shape[0])
def next_batch(data, label, batch_size):
rnd_ind = np.random.randint(0, len(data), batch_size)
return data[rnd_ind], label[rnd_ind]
###Output
_____no_output_____
###Markdown
Create the model: 2 convolutions, each followed by max pooling, then a 1024-unit dense layer followed by softmax. To keep it simple there is no visualization nor storing of the model and its parameters
###Code
#Create a model with 5 classifiers
graph = tf.Graph()
with graph.as_default():
data = tf.placeholder(dtype=tf.float32,shape=(None, 28,140,1))
labels = tf.placeholder(dtype=tf.float32, shape=(None, 5, 11))
w1 = tf.Variable(tf.truncated_normal(shape=(3,3, 1,32), stddev=0.1))
b1 = tf.Variable(tf.zeros(32))
w2 = tf.Variable(tf.truncated_normal(shape=(3,3,32,64), stddev=0.1))
b2 = tf.Variable(tf.constant(1., shape=[64]))
w3 = tf.Variable(tf.truncated_normal(shape=(28 // 4 * 140 // 4 * 64, 1024)))
b3 = tf.Variable(tf.constant(1., shape=[1024]))
w4 = tf.Variable(tf.truncated_normal(shape=(1024,11)))
b4 = tf.Variable(tf.constant(1., shape=[11]))
w5 = tf.Variable(tf.truncated_normal(shape=(1024,11)))
b5 = tf.Variable(tf.constant(1., shape=[11]))
w6 = tf.Variable(tf.truncated_normal(shape=(1024,11)))
b6 = tf.Variable(tf.constant(1., shape=[11]))
w7 = tf.Variable(tf.truncated_normal(shape=(1024,11)))
b7 = tf.Variable(tf.constant(1., shape=[11]))
w8 = tf.Variable(tf.truncated_normal(shape=(1024,11)))
b8 = tf.Variable(tf.constant(1., shape=[11]))
def model(x, w, b):
conv= tf.nn.relu(tf.nn.conv2d(x, w1, [1,1,1,1], padding="SAME")+b1)
conv = tf.nn.max_pool(conv, [1,2,2,1], [1,2,2,1], padding="SAME")
conv = tf.nn.relu(tf.nn.conv2d(conv, w2, [1,1,1,1], padding="SAME")+b2)
conv = tf.nn.max_pool(conv, [1,2,2,1], [1,2,2,1],padding="SAME")
shape = conv.get_shape().as_list()
reshape = tf.reshape(conv, [-1, shape[1] * shape[2] * shape[3]])
dense = tf.nn.relu(tf.matmul(reshape, w3)+b3)
return tf.matmul(dense, w) + b
pred = []
pred.append(model(data, w4, b4))
pred.append(model(data, w5, b5))
pred.append(model(data, w6, b6))
pred.append(model(data, w7, b7))
pred.append(model(data, w8,b8))
pred = tf.stack(pred, axis=1)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=labels))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
prediction = tf.nn.softmax(pred)
init = tf.global_variables_initializer()
sess = tf.Session(graph=graph)
sess.run(init)
for step in range(30000):
data_batch, label_batch = next_batch(train_data_preprocessed, train_labels_OHE, 24)
_,actual_loss = sess.run([optimizer, loss], feed_dict={data : data_batch, labels : label_batch})
if step % 1000 == 0:
print("Step {}, Loss {}".format(step, actual_loss))
train_eval = sess.run(prediction,feed_dict = {data : train_data_preprocessed[:2048]})
print("Train ACC: {}".format(accuracy(train_eval, train_labels[:2048])))
test_eval = sess.run(prediction, feed_dict = {data : test_data_preprocessed[:1024]})
print("Test Acc: {}".format(accuracy(test_eval, test_labels[:1024])))
rnd_ind = np.random.randint(0, len(test_data_preprocessed), 5)
show_images = test_data_preprocessed[rnd_ind]
pred = sess.run(prediction, feed_dict={data : show_images})
for it,img in enumerate(show_images):
plt.figure()
plt.imshow(img.reshape(28,140))
plt.title("Predicted: %s"%(pred[it].argmax(1)))
###Output
_____no_output_____ |
python/SavePytorchModel.ipynb | ###Markdown
See: https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html https://pytorch.org/docs/stable/jit.html
###Code
import torch
import torch.nn as nn
import numpy as np
from collections import OrderedDict
print(torch.__version__)
print(np.__version__)
from dcgan3D import DCGAN_G
###Output
_____no_output_____
###Markdown
Just save model w/o checkpoint The generator needs two parameters, `ngf` and `latent_dim`. Following the setup for https://github.com/FLC-QU-hep/getting_high, they are set to:
###Code
ngf = 32
latent_dim = 100
generator = DCGAN_G(ngf, latent_dim)
script_model = torch.jit.script(generator)
print(script_model)
###Output
RecursiveScriptModule(
original_name=DCGAN_G
(conv1_1): RecursiveScriptModule(original_name=ConvTranspose3d)
(conv1_100): RecursiveScriptModule(original_name=ConvTranspose3d)
(main_conv): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=ConvTranspose3d)
(1): RecursiveScriptModule(original_name=LayerNorm)
(2): RecursiveScriptModule(original_name=ReLU)
(3): RecursiveScriptModule(original_name=ConvTranspose3d)
(4): RecursiveScriptModule(original_name=LayerNorm)
(5): RecursiveScriptModule(original_name=ReLU)
(6): RecursiveScriptModule(original_name=ConvTranspose3d)
(7): RecursiveScriptModule(original_name=LayerNorm)
(8): RecursiveScriptModule(original_name=ReLU)
(9): RecursiveScriptModule(original_name=ConvTranspose3d)
(10): RecursiveScriptModule(original_name=ReLU)
)
)
###Markdown
This yields a TorchScript module object that contains the structure of the model and is readable both from C++ and from Python, so we can save the model.
###Code
script_model.save("./gan_wo_checkpoint.pt")
###Output
_____no_output_____
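###Markdown
As a quick round-trip check (an illustrative addition, not part of the original workflow), the saved TorchScript file can be loaded back with `torch.jit.load`; the equivalent call from C++ is `torch::jit::load`.
###Code
# Reload the TorchScript module saved above and inspect it
loaded_model = torch.jit.load("./gan_wo_checkpoint.pt")
print(loaded_model)
###Output
_____no_output_____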
###Markdown
We even have access to a more code-like version of the forward function defined for our dcgan
###Code
print(script_model.code)
###Output
def forward(self,
noise: Tensor,
energy: Tensor) -> Tensor:
energy_trans = (self.conv1_1).forward(energy, None, )
noise_trans = (self.conv1_100).forward(noise, None, )
input = torch.cat([energy_trans, noise_trans], 1)
x = (self.main_conv).forward(input, )
return torch.view(x, [-1, 30, 30, 30])
|
Seminars/Seminar_2/Seminar.ipynb | ###Markdown
SKLearn. (Almost) every class in SKLearn has the following methods:
###Code
class KNN():
def __init__(self, n_neighbors=5, p=2, metric='minkowski'):
<your code>
def fit(self, X_train, y_train):
<your code>
def predict(self, X_test):
<your code>
def predict_proba(self, X_test):
<your code>
###Output
_____no_output_____
###Markdown
(regressors have no predict_proba, only predict)
###Code
from sklearn.neighbors import KNeighborsClassifier
my_knn = KNN(k=<choose your favourite>)
sklearn_knn = KNeighborsClassifier(k=<choose your favourite>)
###Output
_____no_output_____
###Markdown
House pricing https://www.kaggle.com/c/house-prices-advanced-regression-techniques
###Code
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
diabetes_X = diabetes.data
diabetes_Y = diabetes.target
diabetes.DESCR.split('\n')
X_df = pd.DataFrame(diabetes_X, columns=diabetes.feature_names)
X_df.head()
###Output
_____no_output_____
###Markdown
Let's look at how the features correlate with each other:
###Code
import matplotlib.pyplot as plt
plt.subplots(figsize=(10,10))
sns.heatmap(X_df.corr(), square=True)
plt.show()
###Output
_____no_output_____
###Markdown
Linear Regressionhttp://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
lr = LinearRegression()
# scale the features before cross-validating the linear model
sc = StandardScaler()
X_train_sc = sc.fit_transform(diabetes_X)
np.mean(cross_val_score(lr, X_train_sc, diabetes_Y, scoring='neg_mean_squared_error', cv=5))
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(diabetes_X, diabetes_Y, test_size=0.2)
lr = LinearRegression().fit(x_train, y_train)
lr.predict(x_test).ravel().astype(int)
np.array(y_test).ravel()
###Output
_____no_output_____
###Markdown
Let's look at the coefficients the linear regression produced:
###Code
lr.fit(x_train, y_train)
print(X_df.columns)
print(lr.coef_)
importances = np.abs(lr.coef_)
indices = np.argsort(importances)[::-1]
columns_num = len(X_df.columns)
plt.figure(figsize=(20, 8))
plt.title("Feature importances")
plt.bar(range(columns_num), importances[indices],
color="r", align="center")
plt.xticks(range(columns_num), X_df.columns[indices])
plt.xlim([-1, columns_num])
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
To-Do
###Code
<Try applying Lasso and Ridge regressions to our dataset>
###Output
_____no_output_____
###Markdown
Let's also look at the feature correlations. Predicting cardiovascular disease https://mlbootcamp.ru/round/12/sandbox/
###Code
train_data = pd.read_csv("train_med.csv", delimiter=';')
train_data.head()
X_train, Y_train = np.array(train_data.drop("cardio", axis=1).drop("id", axis=1)), np.array(train_data['cardio'])
from sklearn.ensemble import RandomForestClassifier
rf = <random forest>
<cross-val with accuracy>
<fit random forest>
importances = rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in rf.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(20, 8))
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), train_data.drop(["cardio", "id"], axis=1).columns[indices])
plt.xlim([-1, X_train.shape[1]])
plt.show()
from sklearn.model_selection import GridSearchCV
rf = RandomForestClassifier(max_depth=3)
params = {
'max_depth': [3, 10, 50, None],
'min_samples_split': [2, 3, 4]
}
gsv = GridSearchCV(estimator=rf, param_grid=params, scoring='accuracy', cv=3, verbose=1)
gsv.fit(X_train, Y_train)
print(gsv.best_params_, gsv.best_score_)
gsv.grid_scores_
###Output
_____no_output_____
###Markdown
XGBoost
###Code
<do the same for GradientBoostingClassifier and other estimators>
###Output
_____no_output_____
###Markdown
Model Ensembles. Instead of just the class, we can predict class probabilities:
###Code
train_data = pd.read_csv("train_med.csv", delimiter=';')
X_train, Y_train = np.array(train_data.drop("cardio", axis=1)), np.array(train_data['cardio'])
X_train, X_test, Y_train, Y_test = train_test_split(X_train, Y_train, test_size=0.15, random_state=37)
rf = RandomForestClassifier(n_estimators=100).fit(X_train, Y_train)
pred = rf.predict_proba(X_test)
pred
estimator1 = <choose your favourite>
estimator2 = <choose your second favourite>
estimator1.fit(X_train, Y_train)
estimator2.fit(X_train, Y_train)
pred1 = estimator1.predict_proba(X_test)[:, 1]
pred2 = estimator2.predict_proba(X_test)[:, 1]
pred = <compute average of all the estimators>
###Output
_____no_output_____
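###Markdown
A minimal sketch of what the placeholder above asks for (it only runs once the placeholder estimators are filled in): the simplest ensemble is an unweighted average of the two predicted probability vectors.
###Code
# unweighted average of the two classifiers' positive-class probabilities
pred = (pred1 + pred2) / 2
pred
###Output
_____no_output_____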
###Markdown
SKLearn. (Almost) every class in SKLearn has the following methods:
###Code
class KNN():
def __init__(self, n_neighbors=5, p=2, metric='minkowski'):
<your code>
def fit(self, X_train, y_train):
<your code>
def predict(self, X_test):
<your code>
def predict_proba(self, X_test):
<your code>
###Output
_____no_output_____
###Markdown
(regressors have no predict_proba, only predict)
###Code
from sklearn.neighbors import KNeighborsClassifier
my_knn = KNN(k=<choose your favourite>)
sklearn_knn = KNeighborsClassifier(k=<choose your favourite>)
###Output
_____no_output_____
###Markdown
House pricing https://www.kaggle.com/c/house-prices-advanced-regression-techniques
###Code
train_data = pd.read_csv("train_housing.csv")
train_data.head()
Y_train = train_data[['SalePrice']]
X_train = train_data.drop("SalePrice", axis=1)
###Output
_____no_output_____
###Markdown
Feature Processing
###Code
# def cat_to_numbers(data, columns):
# """
# turn categorical features into numerical
# data: pd.csv dataset
# columns: list of categorical columns to process
# """
# numerical_data = deepcopy(data)
# for column in columns:
# numerical_column = []
# numerical_dict = {}
# for item in data[column]:
# if item not in numerical_dict:
# numerical_dict[item] = len(numerical_dict)
# numerical_column.append(numerical_dict[item])
# numerical_data[column] = numerical_column
# return numerical_data
def binarize(data, columns):
"""
binarize feature
data: pd.csv dataset
columns: list of categorical columns to process
"""
binarized_data = deepcopy(data)
for column in columns:
unique_items = set(data[column])
for unique_item in unique_items:
new_column = []
for item in data[column]:
new_column.append(int(item==unique_item))
binarized_data[column+'_'+unique_item] = new_column
binarized_data.drop(column, axis=1, inplace=True)
return binarized_data
cat_features = [i[0] for i in dict(X_train.dtypes).items() if 'obj' in str(i[1])]
cat_features
from sklearn.preprocessing import LabelEncoder
for c in cat_features:
X_train[c] = LabelEncoder().fit_transform(X_train[c].fillna('_'))
X_train = X_train.fillna(0)
###Output
_____no_output_____
###Markdown
Linear Regressionhttp://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
lr = LinearRegression()
sc = StandardScaler()
X_train_sc = sc.fit_transform(X_train)
cross_val_score(lr, X_train_sc, Y_train, scoring='neg_mean_squared_error')
###Output
_____no_output_____
###Markdown
Let's look at the coefficients the linear regression produced:
###Code
lr.fit(X_train, Y_train)
lr.coef_
###Output
_____no_output_____
###Markdown
Let's also look at the feature correlations:
###Code
plt.subplots(figsize=(10,10))
# encoded_data, encoders = number_encode_features(df)
sns.heatmap(train_data.corr(), square=True)
plt.show()
###Output
_____no_output_____
###Markdown
Predicting cardiovascular disease https://mlbootcamp.ru/round/12/sandbox/
###Code
train_data = pd.read_csv("train_med.csv", delimiter=';')
train_data.head()
X_train, Y_train = np.array(train_data.drop("cardio", axis=1)), np.array(train_data['cardio'])
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100)
cross_val_score(rf, X_train, Y_train, scoring='accuracy')
rf.fit(X_train, Y_train)  # fit the forest so feature_importances_ below is available
importances = rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in rf.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(20, 8))
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), train_data.drop("cardio", axis=1).columns[indices])
plt.xlim([-1, X_train.shape[1]])
plt.show()
from sklearn.model_selection import GridSearchCV
rf = RandomForestClassifier(n_estimators=100)
params = {
'n_estimators': [50, 100, 500]
}
gsv = GridSearchCV(estimator=rf, param_grid=params, scoring='accuracy', cv=3, verbose=1)
gsv.fit(X_train, Y_train)
print(gsv.best_params_, gsv.best_score_)
###Output
{'n_estimators': 500} 0.6105857142857143
###Markdown
XGBoost
###Code
<do the same for GradientBoostingClassifier and other estimators>
###Output
_____no_output_____
###Markdown
Model Ensembles. Instead of just the class, we can predict class probabilities:
###Code
train_data = pd.read_csv("train_med.csv", delimiter=';')
X_train, Y_train = np.array(train_data.drop("cardio", axis=1)), np.array(train_data['cardio'])
X_train, X_test, Y_train, Y_test = train_test_split(X_train, Y_train, test_size=0.15, random_state=37)
rf = RandomForestClassifier(n_estimators=100).fit(X_train, Y_train)
pred = rf.predict_proba(X_test)
pred
estimator1 = <choose your favourite>
estimator2 = <choose your second favourite>
estimator1.fit(X_train, Y_train)
estimator2.fit(X_train, Y_train)
pred1 = estimator1.predict_proba(X_test)[:, 1]
pred2 = estimator2.predict_proba(X_test)[:, 1]
pred = <compute average of all the estimators>
###Output
_____no_output_____
###Markdown
SKLearn. (Almost) every class in SKLearn has the following methods:
###Code
class KNN():
def __init__(self, n_neighbors=5, p=2, metric='minkowski'):
<your code>
def fit(self, X_train, y_train):
<your code>
def predict(self, X_test):
<your code>
def predict_proba(self, X_test):
<your code>
###Output
_____no_output_____
###Markdown
(regressors have no predict_proba, only predict)
###Code
from sklearn.neighbors import KNeighborsClassifier
my_knn = KNN(k=<choose your favourite>)
sklearn_knn = KNeighborsClassifier(k=<choose your favourite>)
###Output
_____no_output_____
###Markdown
Diabetes
###Code
from sklearn.datasets import load_diabetes
diabetes = load_diabetes()
diabetes_X = diabetes.data
diabetes_Y = diabetes.target
diabetes.DESCR.split('\n')
X_df = pd.DataFrame(diabetes_X, columns=diabetes.feature_names)
X_df.head()
###Output
_____no_output_____
###Markdown
Let's look at how the features correlate with each other:
###Code
import matplotlib.pyplot as plt
plt.subplots(figsize=(10,10))
sns.heatmap(X_df.corr(), square=True)
plt.show()
###Output
_____no_output_____
###Markdown
Linear Regressionhttp://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
lr = LinearRegression()
np.mean(cross_val_score(lr, diabetes_X, diabetes_Y, scoring='neg_mean_squared_error', cv=5))
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(diabetes_X, diabetes_Y, test_size=0.2)
lr = LinearRegression().fit(x_train, y_train)
lr.predict(x_test).ravel().astype(int)
np.array(y_test).ravel()
###Output
_____no_output_____
###Markdown
Let's look at the coefficients the linear regression produced:
###Code
print(X_df.columns)
print(lr.coef_)
importances = np.abs(lr.coef_)
indices = np.argsort(importances)[::-1]
columns_num = len(X_df.columns)
plt.figure(figsize=(20, 8))
plt.title("Feature importances")
plt.bar(range(columns_num), importances[indices],
color="r", align="center")
plt.xticks(range(columns_num), X_df.columns[indices])
plt.xlim([-1, columns_num])
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
To-Do
###Code
<Try applying Lasso and Ridge regressions to our dataset>
###Output
_____no_output_____
###Markdown
Let's also look at the feature correlations:
###Code
from sklearn.linear_model import Lasso
llr = Lasso(alpha=0.1)
np.mean(cross_val_score(llr, diabetes_X, diabetes_Y, scoring='neg_mean_squared_error', cv=5))
llr = Lasso(alpha=0.1).fit(x_train, y_train)
X_df.corr()
res = pd.DataFrame(data = [llr.coef_], columns = X_df.columns.values)
res.head()
from sklearn.linear_model import Ridge
rlr = Ridge(alpha=0.1)
np.mean(cross_val_score(rlr, diabetes_X, diabetes_Y, scoring='neg_mean_squared_error', cv=5))
rlr = Ridge(alpha=0.1).fit(x_train, y_train)
res = pd.DataFrame(data = [rlr.coef_], columns = X_df.columns.values)
res.head()
###Output
_____no_output_____
###Markdown
Predicting cardiovascular disease https://mlbootcamp.ru/round/12/sandbox/
###Code
train_data = pd.read_csv("train_med.csv", delimiter=';')
train_data.head()
X_train, Y_train = np.array(train_data.drop("cardio", axis=1).drop("id", axis=1)), np.array(train_data['cardio'])
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
np.mean(cross_val_score(rf, X_train, Y_train, scoring='accuracy', cv=5))
rf = RandomForestClassifier().fit(X_train, Y_train)
importances = rf.feature_importances_
std = np.std([tree.feature_importances_ for tree in rf.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
plt.figure(figsize=(20, 8))
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), train_data.drop(["cardio", "id"], axis=1).columns[indices])
plt.xlim([-1, X_train.shape[1]])
plt.show()
from sklearn.model_selection import GridSearchCV
rf = RandomForestClassifier(max_depth=3)
params = {
'n_estimators': [10, 20],
'max_depth': [3, 10, 50, None],
'min_samples_split': [2, 3, 4]
}
gsv = GridSearchCV(estimator=rf, param_grid=params, scoring='accuracy', cv=3, verbose=2)
gsv.fit(X_train, Y_train)
print(gsv.best_params_, gsv.best_score_)
gsv.grid_scores_
###Output
_____no_output_____
###Markdown
XGBoost
###Code
<do the same for GradientBoostingClassifier and other estimators>
###Output
_____no_output_____
###Markdown
Model Ensembles. Instead of just the class, we can predict class probabilities:
###Code
train_data = pd.read_csv("train_med.csv", delimiter=';')
X_train, Y_train = np.array(train_data.drop("cardio", axis=1)), np.array(train_data['cardio'])
X_train, X_test, Y_train, Y_test = train_test_split(X_train, Y_train, test_size=0.15, random_state=37)
rf = RandomForestClassifier(n_estimators=10).fit(X_train, Y_train)
pred = rf.predict_proba(X_test)
pred
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
estimator1 = SVC(probability=True)  # probability=True is required for predict_proba below
estimator2 = DecisionTreeClassifier()
estimator1.fit(X_train, Y_train)
estimator2.fit(X_train, Y_train)
pred1 = estimator1.predict_proba(X_test)[:, 1]
pred2 = estimator2.predict_proba(X_test)[:, 1]
#pred = <compute average of all the estimators>
###Output
_____no_output_____ |
OOP_Concepts_of_Python.ipynb | ###Markdown
###Code
class TheClass:
pass #to create a class without method
class TheClass: #the name of the class
x = 40 #property of the class named MyClass
class Car(): #create a class name Car
def __init__(self,name,color):
self.name = name #represent the instance of class name Car
self.color = color
def description(self):
return self.name, self.color
def display (self):
print("The name and color of the car",self.description())
obj1 = Car("Ferrari","blue") #to create an object with its attribute values
obj1.display()
###Output
The name and color of the car ('Ferrari', 'blue')
###Markdown
Modify an Object Property
###Code
obj1.name="Mustang"
print(obj1.name)
obj1.display()
###Output
The name and color of the car ('Mustang', 'blue')
###Markdown
Delete Object Property
###Code
del obj1.color
obj1.display()
###Output
_____no_output_____
###Markdown
The name and color of the car ('BMW', 'Blue'). Application 1 - Write a python program to compute the area and perimeter of a rectangle. Use Rectangle as class name, and length and width as attributes
###Code
class Rectangle():
def __init__(self, length, width):
self.length = length
self.width = width
def Area(self):
return self.length*self.width
def Perimeter(self):
return 2*(self.length+self.width)
def display(self):
print("The area of rectangle is ", self.Area())
print("The perimeter of rectangle is ", self.Perimeter())
pol = Rectangle(7, 4.5)
pol.display()
###Output
_____no_output_____
###Markdown
The area of rectangle is 31.5. The perimeter of rectangle is 23.0. Application 2 - Write a python program to display a class name OOP_58001 with your student no. and fullname (Surname, First Name) as attributes
###Code
class OOP_58002:
pass
class OOP_58002(): #create a class named OOP_58002
def __init__(self,number,name):
self.number = number #represents the student number
self.name = name #represents the student's full name
def StudentNumber(self):
return self.number
def FullName(self):
return self.name
def display(self):
print("I am from class OOP_58001")
print("My Student Number is", self.StudentNumber())
print("My Full Name is", self.FullName())
#to display an info with its attributes
info = OOP_58002("202110925", "Jastin Jason Espares")
info.display()
###Output
I am from class OOP_58001
My Student Number is 202110925
My Full Name is Jastin Jason Espares
|
pandas-1.ipynb | ###Markdown
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ **Pandas I****Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows, columns, elements using `.loc` and `.iloc`* Creating filters using boolean operators* Changing data in rows, columns, and elementsThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 75 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___ When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rows Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on this data about the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We can put the population data into a Series.
###Code
# import pandas; `as pd` allows us to shorten typing `pandas` to `pd` each time we use it
import pandas as pd
# Create a data series in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data is a 64-bit integer. `.iloc[]` Integer Location SelectionTo the left of each Series is an index number. This index number is very similar to a Python list index; it can help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but it would be much easier to use if our Series had names like a Python dictionary. It is cumbersome to remember the index number for each country, so we can instead give each row a named index.
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, very similar to how we can supply a key to a dictionary to get a value. We use the `.loc` attribute.
###Code
# Return the series value for Nigeria
worldpop.loc['Nigeria']
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
# Return a slice from Nigeria to Russia
# This slice will include the final element!
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
A Series is like an ordered dictionary. In fact, we can create a Series out of a list (where the index will automatically be numerical starting at 0) or a dictionary (where the keys are the index).
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
worldcitiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)')
#Return the series
worldcitiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
Instead of evaluating to a Truth Table, we can also evaluate to a smaller series.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
# If we wanted to save this to a new series variable
#new_series = worldpop[worldpop > 200]
###Output
_____no_output_____
###Markdown
Pandas uses `|` to represent `or` operations. It uses `&` to represent `and` operations. We can also use `~` for negation.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All required to `True`||\||or|If any are `True`||~|not|The opposite|
###Code
worldpop.loc[(worldpop > 500) | (worldpop < 250)]
###Output
_____no_output_____
###Markdown
Modifying a SeriesWe can use an initialization statement to change a value in our Series.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
print(worldpop)
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Summary of Pandas Series* A Series is a single column of data that may contain a Name and Index* Use `.iloc` to select a row by index number* Use `.loc` to select a row by index name* Use an initialization statement to change values* Boolean operators include & (and), | (or), ~ (negation) Pandas DataFrameIf a Series is like a column of data, a DataFrame is like a table connecting multiple columns together. DataFrames can contain thousands or millions of rows and columns. When working with DataFrames, we are usually using a dataset that has been compiled by someone else. Often the data will be in the form of a CSV or Excel file.
###Code
import pandas as pd
# Create a DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
###Output
_____no_output_____
###Markdown
Exploring DataFrame Contents. Now that we have a DataFrame called `df`, we need to learn a little more about its contents. The first step is usually to explore the DataFrame's attributes. Attributes are properties of the dataset (not functions), so they do not have parentheses `()` after them. |Attribute|Reveals||---|---||.shape| The number of rows and columns||.info| The shape plus the first and last 5 rows||.columns| The name of each column||.index| The name of each row|
###Code
# Use `.shape` to find rows and columns in the DataFrame
df.shape
# Use `.info` to find the shape plus the first and last five rows of the DataFrame
df.info
# Use `.columns` to find the name of each column (if they are named)
df.columns
###Output
_____no_output_____
###Markdown
We can use the `.index` attribute to discover the name of each row in our DataFrame. We set the index column to `Username`, but `Identifier` would also make sense. If no column is chosen, a numeric index is created starting at 0.
###Code
# Use `.index` to list the rows of our DataFrame
df.index
###Output
_____no_output_____
###Markdown
Preview with `.head()` and `.tail()`We can also use the `.head()` and `.tail` methods to get a preview of our DataFrame.
###Code
# Use `.head()` to see the first five lines
# Pass an integer into .head() to see a different number of lines
df.head()
# Use `.tail()` to see the last five lines
# Pass an integer into .tail() to see a different number lines
df.tail()
###Output
_____no_output_____
###Markdown
Display More Rows or ColumnsBy default, Pandas limits the number of rows and columns to display. If desired, we can increase or decrease the number to display. If your DataFrame has limited number of rows or columns, you may wish to show all of them.
###Code
# Show all columns
# Set `None` to an integer to show a set number
pd.set_option('display.max_columns', None)
# Show all rows
# Set `None` to an integer to show a set number
# Be careful if your dataset is thousands of lines long!
pd.set_option('display.max_rows', None)
###Output
_____no_output_____
###Markdown
Change Column NamesIf we wanted to change the column names, one option is to modify the original data file. We can also change the column names in the DataFrame.
###Code
# Updating all column names at once
df.columns = ['email', 'Identifier', 'First name', 'Last name']
df
# Updating a single column name
df.rename(columns={'email': 'Login email'}, inplace=True)
df
###Output
_____no_output_____
###Markdown
Reset the IndexWhen we created the dataframe, we used the `index_col` attribute to set the index column to the `Username` column.```df = pd.read_csv('data/sample2.csv', index_col='Username')```We could reset the index to a numerical index starting at 0 using the `.reset_index()` method.
###Code
# Reset the Index for the DataFrame to integers
# creating a new column
# Passing a `inplace=True` makes the change immediately
df.reset_index()
###Output
_____no_output_____
###Markdown
For many operations that will alter a DataFrame, such as `.reset_index`, the changes will be previewed unless an `inplace=True` parameter is passed. This allows users to preview changes to the data before implementing them in a permanent fashion. Of course, you should always work on a copy of your data in case a manipulation goes awry.
###Code
# Confirm index has not been changed
df
# Make the change to reset the index
df.reset_index(inplace=True)
# Print the index, now changed
df
# Change the index back to `Username`
df.set_index('Username', inplace=True)
df
###Output
_____no_output_____
###Markdown
Sorting the IndexWe can sort the index by using `sort_index()`.
###Code
# Sort the DataFrame by ascending order
df.sort_index()
# Sort by descending order
df.sort_index(ascending=False)
###Output
_____no_output_____
###Markdown
`.loc[]` and `.iloc[]` SelectionLike Series, DataFrames can use the `.iloc[]` and `.loc[]` methods for selection. To select a particular element, we need to supply a row *and* a column.
###Code
# View our DataFrame for reference
df
# Return the value for the specified row and column
df.iloc[6, 3]
# Return the value for the specified row and column
df.loc['booker12', 'First name']
# Select an entire row
df.loc['redtree333', :]
###Output
_____no_output_____
###Markdown
Technically, we could also use: `df.loc['redtree333']` for the same result, but including the `, :` makes our row and column selections explicit, where the `:` is basically a slice that includes the whole column. Using a `:` is required if we want to select an entire column using `.loc[]` since the row selection comes before the column selection.
###Code
# Select an entire column
df.loc[:, 'Login email']
###Output
_____no_output_____
###Markdown
Of course, we can use the `:` to make a slice using `.loc[]` or `.loc`.
###Code
# Slicing rows and columns using `.iloc`
df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
**Note that `.iloc[]` slicing is not inclusive of the final value, similar to a Python list**. On the other hand, `.loc[]` slicing *is* inclusive. The reason for this difference is that excluding the endpoint by name would make the code confusing: we would have to write whatever name comes *after* the last row or column we actually want to include.
###Code
# Slicing rows and columns using `.loc`
df.loc['booker12':'french999', 'Login email':'First name']
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe can also use Boolean expressions to select based on the contents of the elements. We can use these expressions to create filters for selecting particular rows or columns.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All required to `True`||\||or|If any are `True`||~|not|The opposite|
###Code
df
# Return a Truth Table for the `Identifier` column
# Where the Identifier is more than 4000
df.loc[:, 'Identifier'] > 4000
# Preview every row where the Identifier is more than 4000
id_filter = (df.loc[:, 'Identifier'] > 4000)
df.loc[id_filter, :]
# Alternatively, the whole expression can be written out
# But this can be a little more difficult to read
# In this case, it is a good idea to include parentheses
# To make clear the row filter is one expression
#df.loc[(df.loc[:, 'Identifier'] > 4000), :]
# Preview every row with Last name not "Smith"
name_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[name_filter, :]
# Select the row with `First Name` of Jamie
# And last name of `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith') & (df.loc[:, 'First name'] == 'Jamie')
df.loc[name_filter, :]
# Find every row with Last Name not `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith')
df.loc[~name_filter, :]
# Or alternatively
#name_filter = (df.loc[:, 'Last name'] != 'Smith')
#df.loc[name_filter, :]
###Output
_____no_output_____
###Markdown
Modifying a DataFrameA single element can be changed with an initialization statement.
###Code
# Change a value using `.loc[]`
df.loc['jenkins46', 'First name'] = 'Mark'
df
###Output
_____no_output_____
###Markdown
We can also use filters for more powerful manipulation.
###Code
# Create a string filter that checks for email addresses containing
# 'example.com'. For missing (na) elements, output `False` instead of NaN.
email_filt = df['Login email'].str.contains('example.com', na=False)
email_filt
# Re-Initialize `df` without the users with no email address
df = df[email_filt]
df
###Output
_____no_output_____
###Markdown
Dropping Rows Without DataThere is also a `.dropna()` method specifically for dropping rows without data
###Code
# Recreate the DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
df # Confirm the NaN fields have returned
# Remove all rows without a `Login email` using `.dropna()`
df = df.dropna(subset=['Login email'])
df # Confirm the fields were dropped
###Output
_____no_output_____
###Markdown
Created by [Nathan Kelber](http://nkelber.com) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ Pandas 1**Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows using `.loc` and `.iloc`* Create filters using boolean operators* Changing data in the SeriesThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 60 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___ When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rowsThis lesson introduces their basic affordances. Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We will put these population numbers into a Pandas Series.
###Code
# import pandas, `as pd` allows us to shorten typing `pandas` to `pd` when we call pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
To create our Series, we pass a list into the Series method:`variable_name = pd.Series([1, 2, 3])`
###Code
# Create a data series object in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data are 64-bit integers. We can assign a name to our series using `.name`.
###Code
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
`.iloc[]` Integer Location SelectionTo the left of each row in a Series are index numbers. The index numbers are similar to the index numbers for a Python list; they help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
###Output
_____no_output_____
###Markdown
Just like a Python list, we can also slice a series into a smaller series. When slicing a Pandas series, the new series will not include the final index row.
###Code
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but we can also give each row an identifier (like a key within a Python dictionary). We do this by using:`series_name.index = [name_1, name_2, name_3]`Since we are storing the populations of countries, it would also be helpful to include the name of each country within our index.
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, similar to how we can supply a key to a dictionary to get a value. We use the `.loc[]` attribute to reference by name (as opposed to integer location using `.iloc[]`). Try finding the value for Nigeria using both `.iloc[]` and `.loc[]` selection.
###Code
# Use `.iloc[]` to return the series value for Nigeria
# Use `.loc[]` to return the series value for Nigeria
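# One possible solution (a sketch): Nigeria is the 7th row, so its
# integer position is 6, and its index name is the string 'Nigeria'
worldpop.iloc[6]
worldpop.loc['Nigeria']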
###Output
_____no_output_____
###Markdown
Instead of a value, we can return a new series by supplying a list. This will return the value *with the index names* as well.
###Code
# Return a new series containing only Nigeria
# Note that we use two sets of brackets
worldpop.loc[['Nigeria']]
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
###Output
_____no_output_____
###Markdown
Instead of supplying a list of every index name, we can use a slice notation using a `:`. There is, however, a significant difference in how this slice is created with *index names*: the final named index **is included**.
###Code
# Return a slice from Nigeria to Russia
# This slice will include the final element!
# This behavior is different than a list slice
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
Although we created this Pandas series from a list, a series with index names is kind of like an ordered dictionary. Indeed, we could have created our Pandas series from a dictionary instead of a list.
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
citiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)') # We can also specify the series name as an argument
#Return the series
citiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
By passing this expression into `.loc[]`, we can retrieve just the rows that evaluate to `True`.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
###Output
_____no_output_____
###Markdown
Note that we have not changed the values of `worldpop` but only evaluated an expression. `worldpop` remains the same.
###Code
worldpop
###Output
_____no_output_____
###Markdown
If we wanted to store the evaluation, we would need to use an assignment statement, either for `worldpop` or a new variable.
###Code
# If we wanted to save this to a new series variable
new_series = worldpop.loc[worldpop > 200]
new_series
###Output
_____no_output_____
###Markdown
We can also evaluate multiple expressions, but there is a difference in syntax between Python generally and Pandas. Python Boolean expressions are written `and`, `or` and `not`. Pandas Boolean expressions are written `&`, `|`, and `~`.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|Try returning a series from `worldpop` using `.loc[]` for countries with populations either over 500 or under 250.
###Code
# Return a series from `worldpop` with populations
# over 500 or under 250
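# One possible solution (a sketch), combining the two comparisons with `|`
worldpop.loc[(worldpop > 500) | (worldpop < 250)]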
###Output
_____no_output_____
###Markdown
Modifying a SeriesSo far, we have been returning expressions but not actually changing our original Pandas series object. We can use an initialization statement to make a change to the original series object. The syntax is very similar to changing an item value in a Python dictionary.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
worldpop
###Output
_____no_output_____
###Markdown
We could also change the value of multiple rows based on an expression.
###Code
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ Pandas I**Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows, columns, elements using `.loc` and `.iloc`* Creating filters using boolean operators* Changing data in rows, columns, and elementsThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 75 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___ When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rows Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We will put these population numbers into a Pandas Series.
###Code
# import pandas, `as pd` allows us to shorten typing `pandas` to `pd` when we call pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
To create our Series, we pass a list into the Series method:`variable_name = pd.Series([1, 2, 3])`
###Code
# Create a data series in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data are stored as 64-bit integers. `.iloc[]` Integer Location SelectionTo the left of each Series is an index number. This index number is very similar to a Python list index; it can help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but we can also give each row an identifier (like a key within a Python dictionary). We do this by using:`series_name.index = [name_1, name_2, name_3]`
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, very similar to how we can supply a key to a dictionary to get a value. We use the `.loc` attribute.
###Code
# Return the series value for Nigeria
worldpop.loc['Nigeria']
###Output
_____no_output_____
###Markdown
Instead of a value, we can return a new series by supplying a list. This will return the value *with the index names* as well.
###Code
# Return a new series containing only Nigeria
# Note that we use two sets of brackets
worldpop.loc[['Nigeria']]
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
# Return a slice from Nigeria to Russia
# This slice will include the final element!
# This behavior is different than a list slice
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
A Series is like an ordered dictionary. In fact, we can create a Series out of a list (where the index will automatically be numerical starting at 0) or a dictionary (where the keys are the index).
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
worldcitiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)')
#Return the series
worldcitiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
Instead of evaluating to a Truth Table, we can also evaluate to a smaller series by putting the expression into `.loc[]`.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
###Output
_____no_output_____
###Markdown
Note that we have not changed the values of `worldpop` but only evaluated the expression. `worldpop` remains the same.
###Code
worldpop
###Output
_____no_output_____
###Markdown
If we wanted to store the evaluation, we would need to use an assignment statement, either for `worldpop` or a new variable.
###Code
# If we wanted to save this to a new series variable
new_series = worldpop[worldpop > 200]
new_series
###Output
_____no_output_____
###Markdown
Pandas uses `|` to represent `or` operations. It uses `&` to represent `and` operations. We can also use `~` for negation.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
worldpop.loc[(worldpop > 500) | (worldpop < 250)]
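# A couple of extra sketches showing the other operators
# `&` requires both conditions to be True
worldpop.loc[(worldpop > 200) & (worldpop < 1000)]
# `~` negates a condition
worldpop.loc[~(worldpop > 200)]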
###Output
_____no_output_____
###Markdown
Modifying a SeriesWe can use an initialization statement to change a value in our Series. The syntax is very similar to changing an item value in a list.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
worldpop
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Summary of Pandas Series* A Series is a single column of data that may contain a Name and Index* Use `.iloc` to select a row by index number* Use `.loc` to select a row by index name* Use an initialization statement to change values* Boolean operators include & (and), | (or), ~ (negation) Pandas DataFrameIf a Series is like a column of data, a DataFrame is like a table connecting multiple columns together. DataFrames can contain thousands or millions of rows and columns. When working with DataFrames, we are usually using a dataset that has been compiled by someone else. Often the data will be in the form of a CSV or Excel file. We can import a .csv file with `.read_csv()` method, passing in the csv location. We can also supply an index column name with `index_col`.
###Code
import pandas as pd
# Create a DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
###Output
_____no_output_____
###Markdown
Exploring DataFrame ContentsNow that we have a DataFrame called `df`, we need to learn a little more about its contents. The first step is usually to explore the DataFrame's attributes. Attributes are properties of the dataset (not functions), so they do not have parentheses `()` after them. |Attribute|Reveals||---|---||.shape| The number of rows and columns||.info| The shape plus the first and last 5 rows||.columns| The name of each column||.index| The name of each row|
###Code
# Use `.shape` to find rows and columns in the DataFrame
df.shape
# Use `.info` to find the shape plus the first and last five rows of the DataFrame
df.info
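# Note: `.info` is actually a method; calling it with parentheses prints
# a concise summary of column names, non-null counts, and dtypes
df.info()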
# Use `.columns` to find the name of each column (if they are named)
df.columns
###Output
_____no_output_____
###Markdown
We can use the `.index` attribute to discover the name for each row in our DataFrame. We set the index column to `Username`, but `Identifier` would also make sense. If no column is chosen, a numeric index is created starting at 0.
###Code
# Use `.index` to list the rows of our DataFrame
df.index
###Output
_____no_output_____
###Markdown
Preview with `.head()` and `.tail()`We can also use the `.head()` and `.tail()` methods to get a preview of our DataFrame.
###Code
# Use `.head()` to see the first five lines
# Pass an integer into .head() to see a different number of lines
df.head()
# Use `.tail()` to see the last five lines
# Pass an integer into .tail() to see a different number lines
df.tail()
###Output
_____no_output_____
###Markdown
Display More Rows or ColumnsBy default, Pandas limits the number of rows and columns to display. If desired, we can increase or decrease the number to display. If your DataFrame has a limited number of rows or columns, you may wish to show all of them.
###Code
# Show all columns
# Set `None` to an integer to show a set number
pd.set_option('display.max_columns', None)
# Show all rows
# Set `None` to an integer to show a set number
# Be careful if your dataset is thousands of lines long!
pd.set_option('display.max_rows', None)
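# If needed, the display defaults can be restored later with `pd.reset_option()` (a sketch)
pd.reset_option('display.max_columns')
pd.reset_option('display.max_rows')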
###Output
_____no_output_____
###Markdown
Change Column NamesIf we wanted to change the column names, one option is to modify the original data file. We can also change the column names in the DataFrame.
###Code
# Updating all column names at once
df.columns = ['email', 'Identifier', 'First name', 'Last name']
df
# Updating a single column name
df.rename(columns={'email': 'Login email'}, inplace=True)
df
###Output
_____no_output_____
###Markdown
By default, `inplace=False`, which means that Pandas will output what the change *would* look like but not make changes to the DataFrame. It is a preview of the changes. This feature is intentional to make sure the user does not accidentally make a permanent change. **There is no undo! Always keep a backup of your file and do not write changes over the original file unless you are sure they are correct.**Passing `inplace=True` tells Pandas to make the change immediately without any preview. Reset the IndexWhen we created the DataFrame, we used the `index_col` parameter to set the index column to the `Username` column.```df = pd.read_csv('data/sample2.csv', index_col='Username')```We could reset the index to a numerical index starting at 0 using the `.reset_index()` method.
###Code
# Reset the Index for the DataFrame to integers
# creating a new column
# Passing `inplace=True` makes the change immediately
df.reset_index()
###Output
_____no_output_____
###Markdown
For many operations that will alter a DataFrame, such as `.reset_index`, the changes will be previewed unless an `inplace=True` parameter is passed. This allows users to preview changes to the data before implementing them in a permanent fashion. Of course, you should always work on a copy of your data in case a manipulation goes awry.
###Code
# Confirm index has not been changed
df
# Make the change to reset the index
df.reset_index(inplace=True)
# Print the index, now changed
df
# Change the index back to `Username`
df.set_index('Username', inplace=True)
df
###Output
_____no_output_____
###Markdown
Sorting the IndexWe can sort the index by using `sort_index()`.
###Code
# Sort the DataFrame by ascending order
df.sort_index()
# Sort by descending order
df.sort_index(ascending=False)
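# A related sketch: `.sort_values()` sorts by a column's values instead of the index
df.sort_values(by='Last name')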
###Output
_____no_output_____
###Markdown
`.loc[]` and `.iloc[]` SelectionLike Series, DataFrames can use the `.iloc[]` and `.loc[]` methods for selection. To select a particular element, we need to supply a row *and* a column.
###Code
# View our DataFrame for reference
df
# Return the value for the specified row and column
df.iloc[6, 3]
# Return the value for the specified row and column
df.loc['booker12', 'First name']
# Select an entire row
df.loc['redtree333', :]
###Output
_____no_output_____
###Markdown
Technically, we could also use: `df.loc['redtree333']` for the same result, but including the `, :` makes our row and column selections explicit, where the `:` is basically a slice that includes the whole column. Using a `:` is required if we want to select an entire column using `.loc[]` since the row selection comes before the column selection.
###Code
# Select an entire column
df.loc[:, 'Login email']
###Output
_____no_output_____
###Markdown
Of course, we can use the `:` to make a slice using `.loc[]` or `.iloc[]`.
###Code
# Slicing rows and columns using `.iloc`
df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
**Note that `.iloc[]` slicing is not inclusive of the final value, similar to a Python list**. On the other hand, `.loc[]` slicing *is* inclusive. The reason for this difference is that excluding the final name would make the code confusing, since we would have to reference whatever name comes *after* the last name we actually want.
###Code
# Slicing rows and columns using `.loc`
df.loc['booker12':'french999', 'Login email':'First name']
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe can also use Boolean expressions to select based on the contents of the elements. We can use these expressions to create filters for selecting particular rows or columns.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
df
# Return a Truth Table for the `Identifier` column
# Where the Identifier is more than 4000
df.loc[:, 'Identifier'] > 4000
# Preview every row where the Identifier is more than 4000
id_filter = (df.loc[:, 'Identifier'] > 4000)
df.loc[id_filter, :]
# Alternatively, the whole expression can be written out
# But this can be a little more difficult to read
# In this case, it is a good idea to include parentheses
# To make clear the row filter is one expression
#df.loc[(df.loc[:, 'Identifier'] > 4000), :]
# Preview every row with Last name "Smith"
name_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[name_filter, :]
# Select the row with `First Name` of Jamie
# And last name of `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith') & (df.loc[:, 'First name'] == 'Jamie')
df.loc[name_filter, :]
# Find every row with Last Name not `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith')
df.loc[~name_filter, :]
# Or alternatively
#name_filter = (df.loc[:, 'Last name'] != 'Smith')
#df.loc[name_filter, :]
###Output
_____no_output_____
###Markdown
Modifying a DataFrameA single element can be changed with an initialization statement.
###Code
# Change a value using `.loc[]`
df.loc['jenkins46', 'First name'] = 'Mark'
df
###Output
_____no_output_____
###Markdown
We can also use filters for more powerful manipulation.
###Code
# Create a string filter that checks for email addresses containing
# 'example.com'. For missing (na) elements, output `False` instead of NaN.
email_filt = df['Login email'].str.contains('example.com', na=False)
email_filt
# Re-initialize `df`, keeping only the users that have an email address
df = df[email_filt]
df
###Output
_____no_output_____
###Markdown
Dropping Rows Without DataThere is also a `.dropna()` method specifically for dropping rows without data
###Code
# Recreate the DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
df # Confirm the NaN fields have returned
# Remove all rows without a `Login email` using `.dropna()`
df = df.dropna(subset=['Login email'])
df # Confirm the fields were dropped
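# `.dropna()` also takes other optional parameters (a sketch):
# `how='all'` drops only rows where every column is missing,
# while `thresh=3` keeps rows with at least 3 non-missing values
df.dropna(how='all')
df.dropna(thresh=3)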
###Output
_____no_output_____
###Markdown
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ Pandas I**Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows, columns, elements using `.loc` and `.iloc`* Creating filters using boolean operators* Changing data in rows, columns, and elementsThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 75 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___ When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rows Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We will put these population numbers into a Pandas Series.
###Code
# import pandas, `as pd` allows us to shorten typing `pandas` to `pd` when we call pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
To create our Series, we pass a list into the Series method:`variable_name = pd.Series([1, 2, 3])`
###Code
# Create a data series in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data are stored as 64-bit integers. `.iloc[]` Integer Location SelectionTo the left of each Series is an index number. This index number is very similar to a Python list index; it can help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but we can also give each row an identifier (like a key within a Python dictionary). We do this by using:`series_name.index = [name_1, name_2, name_3]`
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, very similar to how we can supply a key to a dictionary to get a value. We use the `.loc` attribute.
###Code
# Return the series value for Nigeria
worldpop.loc['Nigeria']
###Output
_____no_output_____
###Markdown
Instead of a value, we can return a new series by supplying a list. This will return the value *with the index names* as well.
###Code
# Return a new series containing only Nigeria
# Note that we use two sets of brackets
worldpop.loc[['Nigeria']]
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
# Return a slice from Nigeria to Russia
# This slice will include the final element!
# This behavior is different than a list slice
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
A Series is like an ordered dictionary. In fact, we can create a Series out of a list (where the index will automatically be numerical starting at 0) or a dictionary (where the keys are the index).
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
worldcitiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)')
#Return the series
worldcitiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
Instead of evaluating to a Truth Table, we can also evaluate to a smaller series by putting the expression into `.loc[]`.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
###Output
_____no_output_____
###Markdown
Note that we have not changed the values of `worldpop` but only evaluated the expression. `worldpop` remains the same.
###Code
worldpop
###Output
_____no_output_____
###Markdown
If we wanted to store the evaluation, we would need to use an assignment statement, either for `worldpop` or a new variable.
###Code
# If we wanted to save this to a new series variable
new_series = worldpop[worldpop > 200]
new_series
###Output
_____no_output_____
###Markdown
Pandas uses `|` to represent `or` operations. It uses `&` to represent `and` operations. We can also use `~` for negation.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
worldpop.loc[(worldpop > 500) | (worldpop < 250)]
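# The `&` and `~` operators work the same way (a sketch)
worldpop.loc[(worldpop > 200) & (worldpop < 1000)]
worldpop.loc[~(worldpop > 200)]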
###Output
_____no_output_____
###Markdown
Modifying a SeriesWe can use an initialization statement to change a value in our Series. The syntax is very similar to changing an item value in a list.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
worldpop
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Summary of Pandas Series* A Series is a single column of data that may contain a Name and Index* Use `.iloc` to select a row by index number* Use `.loc` to select a row by index name* Use an initialization statement to change values* Boolean operators include & (and), | (or), ~ (negation) Pandas DataFrameIf a Series is like a column of data, a DataFrame is like a table connecting multiple columns together. DataFrames can contain thousands or millions of rows and columns. When working with DataFrames, we are usually using a dataset that has been compiled by someone else. Often the data will be in the form of a CSV or Excel file. We can import a .csv file with `.read_csv()` method, passing in the csv location. We can also supply an index column name with `index_col`.
###Code
import pandas as pd
# Create a DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
###Output
_____no_output_____
###Markdown
Exploring DataFrame ContentsNow that we have a DataFrame called `df`, we need to learn a little more about its contents. The first step is usually to explore the DataFrame's attributes. Attributes are properties of the dataset (not functions), so they do not have parentheses `()` after them. |Attribute|Reveals||---|---||.shape| The number of rows and columns||.info| The shape plus the first and last 5 rows||.columns| The name of each column||.index| The name of each row|
###Code
# Use `.shape` to find rows and columns in the DataFrame
df.shape
# Use `.info` to find the shape plus the first and last five rows of the DataFrame
df.info
# Use `.columns` to find the name of each column (if they are named)
df.columns
###Output
_____no_output_____
###Markdown
We can use the `.index` attribute to discover the name for each row in our DataFrame. We set the index column to `Username`, but `Identifier` would also make sense. If no column is chosen, a numeric index is created starting at 0.
###Code
# Use `.index` to list the rows of our DataFrame
df.index
###Output
_____no_output_____
###Markdown
Preview with `.head()` and `.tail()`We can also use the `.head()` and `.tail()` methods to get a preview of our DataFrame.
###Code
# Use `.head()` to see the first five lines
# Pass an integer into .head() to see a different number of lines
df.head()
# Use `.tail()` to see the last five lines
# Pass an integer into .tail() to see a different number lines
df.tail()
###Output
_____no_output_____
###Markdown
Display More Rows or ColumnsBy default, Pandas limits the number of rows and columns to display. If desired, we can increase or decrease the number to display. If your DataFrame has a limited number of rows or columns, you may wish to show all of them.
###Code
# Show all columns
# Set `None` to an integer to show a set number
pd.set_option('display.max_columns', None)
# Show all rows
# Set `None` to an integer to show a set number
# Be careful if your dataset is thousands of lines long!
pd.set_option('display.max_rows', None)
###Output
_____no_output_____
###Markdown
Change Column NamesIf we wanted to change the column names, one option is to modify the original data file. We can also change the column names in the DataFrame.
###Code
# Updating all column names at once
df.columns = ['email', 'Identifier', 'First name', 'Last name']
df
# Updating a single column name
df.rename(columns={'email': 'Login email'}, inplace=True)
df
###Output
_____no_output_____
###Markdown
By default, `inplace=False`, which means that Pandas will output what the change *would* look like but not make changes to the DataFrame. It is a preview of the changes. This feature is intentional to make sure the user does not accidentally make a permanent change. **There is no undo! Always keep a backup of your file and do not write changes over the original file unless you are sure they are correct.**Passing `inplace=True` tells Pandas to make the change immediately without any preview. Reset the IndexWhen we created the DataFrame, we used the `index_col` parameter to set the index column to the `Username` column.```df = pd.read_csv('data/sample2.csv', index_col='Username')```We could reset the index to a numerical index starting at 0 using the `.reset_index()` method.
###Code
# Reset the Index for the DataFrame to integers
# creating a new column
# Passing `inplace=True` makes the change immediately
df.reset_index()
###Output
_____no_output_____
###Markdown
For many operations that will alter a DataFrame, such as `.reset_index`, the changes will be previewed unless an `inplace=True` parameter is passed. This allows users to preview changes to the data before implementing them in a permanent fashion. Of course, you should always work on a copy of your data in case a manipulation goes awry.
###Code
# Confirm index has not been changed
df
# Make the change to reset the index
df.reset_index(inplace=True)
# Print the index, now changed
df
# Change the index back to `Username`
df.set_index('Username', inplace=True)
df
###Output
_____no_output_____
###Markdown
Sorting the IndexWe can sort the index by using `sort_index()`.
###Code
# Sort the DataFrame by ascending order
df.sort_index()
# Sort by descending order
df.sort_index(ascending=False)
###Output
_____no_output_____
###Markdown
`.loc[]` and `.iloc[]` SelectionLike Series, DataFrames can use the `.iloc[]` and `.loc[]` methods for selection. To select a particular element, we need to supply a row *and* a column.
###Code
# View our DataFrame for reference
df
# Return the value for the specified row and column
df.iloc[6, 3]
# Return the value for the specified row and column
df.loc['booker12', 'First name']
# Select an entire row
df.loc['redtree333', :]
###Output
_____no_output_____
###Markdown
Technically, we could also use: `df.loc['redtree333']` for the same result, but including the `, :` makes our row and column selections explicit, where the `:` is basically a slice that includes the whole column. Using a `:` is required if we want to select an entire column using `.loc[]` since the row selection comes before the column selection.
###Code
# Select an entire column
df.loc[:, 'Login email']
###Output
_____no_output_____
###Markdown
Of course, we can use the `:` to make a slice using `.loc[]` or `.iloc[]`.
###Code
# Slicing rows and columns using `.iloc`
df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
**Note that `.iloc[]` slicing is not inclusive of the final value, similar to a Python list**. On the other hand, `.loc[]` slicing *is* inclusive. The reason for this difference is that excluding the final name would make the code confusing, since we would have to reference whatever name comes *after* the last name we actually want.
###Code
# Slicing rows and columns using `.loc`
df.loc['booker12':'french999', 'Login email':'First name']
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe can also use Boolean expressions to select based on the contents of the elements. We can use these expressions to create filters for selecting particular rows or columns.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
df
# Return a Truth Table for the `Identifier` column
# Where the Identifier is more than 4000
df.loc[:, 'Identifier'] > 4000
# Preview every row where the Identifier is more than 4000
id_filter = (df.loc[:, 'Identifier'] > 4000)
df.loc[id_filter, :]
# Alternatively, the whole expression can be written out
# But this can be a little more difficult to read
# In this case, it is a good idea to include parentheses
# To make clear the row filter is one expression
#df.loc[(df.loc[:, 'Identifier'] > 4000), :]
# Preview every row with Last name "Smith"
name_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[name_filter, :]
# Select the row with `First Name` of Jamie
# And last name of `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith') & (df.loc[:, 'First name'] == 'Jamie')
df.loc[name_filter, :]
# Find every row with Last Name not `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith')
df.loc[~name_filter, :]
# Or alternatively
#name_filter = (df.loc[:, 'Last name'] != 'Smith')
#df.loc[name_filter, :]
###Output
_____no_output_____
###Markdown
Modifying a DataFrameA single element can be changed with an initialization statement.
###Code
# Change a value using `.loc[]`
df.loc['jenkins46', 'First name'] = 'Mark'
df
###Output
_____no_output_____
###Markdown
We can also use filters for more powerful manipulation.
###Code
# Create a string filter that checks for email addresses containing
# 'example.com'. For missing (na) elements, output `False` instead of NaN.
email_filt = df['Login email'].str.contains('example.com', na=False)
email_filt
# Re-initialize `df`, keeping only the users that have an email address
df = df[email_filt]
df
###Output
_____no_output_____
###Markdown
Dropping Rows Without DataThere is also a `.dropna()` method specifically for dropping rows without data
###Code
# Recreate the DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
df # Confirm the NaN fields have returned
# Remove all rows without a `Login email` using `.dropna()`
df = df.dropna(subset=['Login email'])
df # Confirm the fields were dropped
###Output
_____no_output_____
###Markdown
Get the data from: http://ucsdlib.github.io/win2016-python-gps/ or here: http://goo.gl/Aqbx3q* Put it in your desktop folder, i.e. the working directory where you open Jupyter
###Code
!ls
# Import pandas so the `read_csv` call below works
import pandas
pandas.read_csv('A1_mosquito_data.csv')
data = pandas.read_csv('A1_mosquito_data.csv')
print(data)
data #prints out a table
###Output
_____no_output_____
###Markdown
See what type of Python object this is:
###Code
print(type(data))
###Output
<class 'pandas.core.frame.DataFrame'>
###Markdown
* can select subsets using slicing - first 2 rows
###Code
print(data[0:2])
data[1] # raises an error: can't ask for a single row with a bare integer
###Output
_____no_output_____
###Markdown
* [1] could mean multiple things
###Code
print(data[1:2]) #this works
print(data.iloc[1]) # integer location
print(data.iloc[1:3])
data['temperature']
data.temperature
###Output
_____no_output_____
###Markdown
* subset based on value from other rows
###Code
print(data['temperature'][data['year']>2005])
###Output
5 75
6 80
7 85
8 74
9 74
Name: temperature, dtype: int64
###Markdown
* perform common math* avg value for each variable
###Code
print(data.mean())
print(data.max())
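# `.describe()` gives summary statistics (count, mean, std, min, quartiles, max)
# for every numeric column at once (a sketch)
print(data.describe())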
print(data['temperature'].min())
print(data['mosquitos'][1:3].std())
###Output
45.2548339959
###Markdown
* we can loop over the data frame using what we learned last week
###Code
for column_name in ["temperature", "rainfall"]:
print("Max of", column_name, ":", data[column_name].max())
print("Max of", "temperature", ":", data["temperature"].max())
print("Max of", "rainfall", ":", data["rainfall"].max())
data.columns
%matplotlib inline
data["temperature"].plot(color="black", marker="*")
data
data = data.set_index('year')
data
data['temperature'].plot()
###Output
_____no_output_____
###Markdown
* subset by row
###Code
data[data["temperature"] > 80]
###Output
_____no_output_____
###Markdown
Display the mosquitos column when rainfall is less than 200
###Code
data[data["rainfall"] < 200]['mosquitos']
data[data["rainfall"] < 200]["mosquitos"].max()
data["mosquitos"][data["rainfall"] < 200]
data["rainfall"] < 200
mosquitos_with_low_rainfall = data["mosquitos"][data["rainfall"] < 200]
mosquitos_with_low_rainfall
type(mosquitos_with_low_rainfall)
mosquitos_with_low_rainfall.index
data.plot()
###Output
_____no_output_____
###Markdown
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ Pandas I**Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows, columns, elements using `.loc` and `.iloc`* Creating filters using boolean operators* Changing data in rows, columns, and elementsThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 75 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___
###Code
### Download Sample Files for this Lesson
import urllib.request
import os

# Make sure the destination folder exists before downloading
os.makedirs('./data', exist_ok=True)
download_urls = [
'https://ithaka-labs.s3.amazonaws.com/static-files/images/tdm/tdmdocs/sample2.csv'
]
for url in download_urls:
urllib.request.urlretrieve(url, './data/' + url.rsplit('/', 1)[-1])
###Output
_____no_output_____
###Markdown
When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rows Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We will put these population numbers into a Pandas Series.
###Code
# import pandas, `as pd` allows us to shorten typing `pandas` to `pd` when we call pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
To create our Series, we pass a list into the Series method:`variable_name = pd.Series([1, 2, 3])`
###Code
# Create a data series in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data are stored as 64-bit integers. `.iloc[]` Integer Location SelectionTo the left of each Series is an index number. This index number is very similar to a Python list index; it can help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but we can also give each row an identifier (like a key within a Python dictionary). We do this by using:`series_name.index = [name_1, name_2, name_3]`
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, very similar to how we can supply a key to a dictionary to get a value. We use the `.loc` attribute.
###Code
# Return the series value for Nigeria
worldpop.loc['Nigeria']
###Output
_____no_output_____
###Markdown
Instead of a value, we can return a new series by supplying a list. This will return the value *with the index names* as well.
###Code
# Return a new series containing only Nigeria
# Note that we use two sets of brackets
worldpop.loc[['Nigeria']]
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
# Return a slice from Nigeria to Russia
# This slice will include the final element!
# This behavior is different than a list slice
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
A Series is like an ordered dictionary. In fact, we can create a Series out of a list (where the index will automatically be numerical starting at 0) or a dictionary (where the keys are the index).
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
worldcitiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)')
#Return the series
worldcitiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
Instead of evaluating to a Truth Table, we can also evaluate to a smaller series by putting the expression into `.loc[]`.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
###Output
_____no_output_____
###Markdown
Note that we have not changed the values of `worldpop` but only evaluated the expression. `worldpop` remains the same.
###Code
worldpop
###Output
_____no_output_____
###Markdown
If we wanted to store the evaluation, we would need to use an assignment statement, either for `worldpop` or a new variable.
###Code
# If we wanted to save this to a new series variable
new_series = worldpop[worldpop > 200]
new_series
###Output
_____no_output_____
###Markdown
Pandas uses `|` to represent `or` operations. It uses `&` to represent `and` operations. We can also use `~` for negation.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
worldpop.loc[(worldpop > 500) | (worldpop < 250)]
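# `~` negates a condition, returning the rows where the expression is False (a sketch)
worldpop.loc[~(worldpop > 200)]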
###Output
_____no_output_____
###Markdown
Modifying a SeriesWe can use an initialization statement to change a value in our Series. The syntax is very similar to changing an item value in a list.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
worldpop
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Summary of Pandas Series* A Series is a single column of data that may contain a Name and Index* Use `.iloc` to select a row by index number* Use `.loc` to select a row by index name* Use an initialization statement to change values* Boolean operators include & (and), | (or), ~ (negation) Pandas DataFrameIf a Series is like a column of data, a DataFrame is like a table connecting multiple columns together. DataFrames can contain thousands or millions of rows and columns. When working with DataFrames, we are usually using a dataset that has been compiled by someone else. Often the data will be in the form of a CSV or Excel file. We can import a .csv file with `.read_csv()` method, passing in the csv location. We can also supply an index column name with `index_col`.
###Code
import pandas as pd
# Create a DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
###Output
_____no_output_____
###Markdown
Exploring DataFrame ContentsNow that we have a DataFrame called `df`, we need to learn a little more about its contents. The first step is usually to explore the DataFrame's attributes. Attributes are properties of the dataset (not functions), so they do not have parentheses `()` after them. |Attribute|Reveals||---|---||.shape| The number of rows and columns||.info| The shape plus the first and last 5 rows||.columns| The name of each column||.index| The name of each row|
###Code
# Use `.shape` to find rows and columns in the DataFrame
df.shape
# Use `.info` to find the shape plus the first and last five rows of the DataFrame
df.info
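# Note: `.info` is actually a method; calling it with parentheses prints
# a concise summary of column names, non-null counts, and dtypes
df.info()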
# Use `.columns` to find the name of each column (if they are named)
df.columns
###Output
_____no_output_____
###Markdown
We can use the `.index` attribute to discover the name for each row in our DataFrame. We set the index column to `Username`, but `Identifier` would also make sense. If no column is chosen, a numeric index is created starting at 0.
###Code
# Use `.index` to list the rows of our DataFrame
df.index
###Output
_____no_output_____
###Markdown
Preview with `.head()` and `.tail()`We can also use the `.head()` and `.tail()` methods to get a preview of our DataFrame.
###Code
# Use `.head()` to see the first five lines
# Pass an integer into .head() to see a different number of lines
df.head()
# Use `.tail()` to see the last five lines
# Pass an integer into .tail() to see a different number lines
df.tail()
###Output
_____no_output_____
###Markdown
Display More Rows or ColumnsBy default, Pandas limits the number of rows and columns to display. If desired, we can increase or decrease the number to display. If your DataFrame has a limited number of rows or columns, you may wish to show all of them.
###Code
# Show all columns
# Set `None` to an integer to show a set number
pd.set_option('display.max_columns', None)
# Show all rows
# Set `None` to an integer to show a set number
# Be careful if your dataset is thousands of lines long!
pd.set_option('display.max_rows', None)
###Output
_____no_output_____
###Markdown
Change Column NamesIf we wanted to change the column names, one option is to modify the original data file. We can also change the column names in the DataFrame.
###Code
# Updating all column names at once
df.columns = ['email', 'Identifier', 'First name', 'Last name']
df
# Updating a single column name
df.rename(columns={'email': 'Login email'}, inplace=True)
df
###Output
_____no_output_____
###Markdown
By default, `inplace=False`, which means that Pandas will output what the change *would* look like but not make changes to the DataFrame. It is a preview of the changes. This feature is intentional to make sure the user does not accidentally make a permanent change. **There is no undo! Always keep a backup of your file and do not write changes over the original file unless you are sure they are correct.**Passing `inplace=True` tells Pandas to make the change immediately without any preview. Reset the IndexWhen we created the DataFrame, we used the `index_col` parameter to set the index column to the `Username` column.```df = pd.read_csv('data/sample2.csv', index_col='Username')```We could reset the index to a numerical index starting at 0 using the `.reset_index()` method.
###Code
# Reset the Index for the DataFrame to integers
# creating a new column
# Passing `inplace=True` makes the change immediately
df.reset_index()
###Output
_____no_output_____
###Markdown
For many operations that will alter a DataFrame, such as `.reset_index`, the changes will be previewed unless an `inplace=True` parameter is passed. This allows users to preview changes to the data before implementing them in a permanent fashion. Of course, you should always work on a copy of your data in case a manipulation goes awry.
###Code
# Confirm index has not been changed
df
# Make the change to reset the index
df.reset_index(inplace=True)
# Print the index, now changed
df
# Change the index back to `Username`
df.set_index('Username', inplace=True)
df
###Output
_____no_output_____
###Markdown
Sorting the IndexWe can sort the index by using `sort_index()`.
###Code
# Sort the DataFrame by ascending order
df.sort_index()
# Sort by descending order
df.sort_index(ascending=False)
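# A related sketch: `.sort_values()` sorts by a column's values instead of the index
df.sort_values(by='Last name')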
###Output
_____no_output_____
###Markdown
`.loc[]` and `.iloc[]` SelectionLike Series, DataFrames can use the `.iloc[]` and `.loc[]` methods for selection. To select a particular element, we need to supply a row *and* a column.
###Code
# View our DataFrame for reference
df
# Return the value for the specified row and column
df.iloc[6, 3]
# Return the value for the specified row and column
df.loc['booker12', 'First name']
# Select an entire row
df.loc['redtree333', :]
###Output
_____no_output_____
###Markdown
Technically, we could also use: `df.loc['redtree333']` for the same result, but including the `, :` makes our row and column selections explicit, where the `:` is basically a slice that includes the whole column. Using a `:` is required if we want to select an entire column using `.loc[]` since the row selection comes before the column selection.
###Code
# Select an entire column
df.loc[:, 'Login email']
###Output
_____no_output_____
###Markdown
Of course, we can use the `:` to make a slice using `.loc[]` or `.iloc[]`.
###Code
# Slicing rows and columns using `.iloc`
df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
**Note that `.iloc[]` slicing is not inclusive of the final value, similar to a Python list**. On the other hand, `.loc[]` slicing *is* inclusive. The reason for this difference is that an exclusive `.loc[]` slice would make the code confusing, since we would have to reference whatever name comes *after* the last name we actually want to include.
###Code
# Slicing rows and columns using `.loc`
df.loc['booker12':'french999', 'Login email':'First name']
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe can also use Boolean expressions to select based on the contents of the elements. We can use these expressions to create filters for selecting particular rows or columns.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
df
# Return a Truth Table for the `Identifier` column
# Where the Identifier is more than 4000
df.loc[:, 'Identifier'] > 4000
# Preview every row where the Identifier is more than 4000
id_filter = (df.loc[:, 'Identifier'] > 4000)
df.loc[id_filter, :]
# Alternatively, the whole expression can be written out
# But this can be a little more difficult to read
# In this case, it is a good idea to include parentheses
# To make clear the row filter is one expression
#df.loc[(df.loc[:, 'Identifier'] > 4000), :]
# Preview every row with Last name "Smith"
name_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[name_filter, :]
# Select the row with `First Name` of Jamie
# And last name of `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith') & (df.loc[:, 'First name'] == 'Jamie')
df.loc[name_filter, :]
# Find every row with Last Name not `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith')
df.loc[~name_filter, :]
# Or alternatively
#name_filter = (df.loc[:, 'Last name'] != 'Smith')
#df.loc[name_filter, :]
###Output
_____no_output_____
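The cells above demonstrate `&` and `~`; the `|` operator from the table works the same way. A minimal sketch, assuming the `df` and column names used above (the cutoff values are purely illustrative):

```
# Rows where the Identifier is very small OR very large (illustrative thresholds)
low_or_high = (df.loc[:, 'Identifier'] < 2000) | (df.loc[:, 'Identifier'] > 9000)
df.loc[low_or_high, :]
```

As with the other filters, wrapping each comparison in parentheses keeps the expression unambiguous.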
###Markdown
Modifying a DataFrameA single element can be changed with an initialization statement.
###Code
# Change a value using `.loc[]`
df.loc['jenkins46', 'First name'] = 'Mark'
df
###Output
_____no_output_____
###Markdown
We can also use filters for more powerful manipulation.
###Code
# Create a string filter that checks for email addresses containing
# 'example.com'. For missing (na) elements, output `False` instead of NaN.
email_filt = df['Login email'].str.contains('example.com', na=False)
email_filt
# Re-Initialize `df` without the users with no email address
df = df[email_filt]
df
###Output
_____no_output_____
###Markdown
Dropping Rows Without DataThere is also a `.dropna()` method specifically for dropping rows with missing data.
###Code
# Recreate the DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
df # Confirm the NaN fields have returned
# Remove all rows without a `Login email` using `.dropna()`
df = df.dropna(subset=['Login email'])
df # Confirm the fields were dropped
###Output
_____no_output_____
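As a quick sketch of what `.dropna()` does without the `subset` argument (standard pandas keywords, reusing the same `df`): by default it drops any row that contains at least one missing value, while `how='all'` only drops rows that are missing in every column. Neither call below is assigned back, so they are only previews.

```
# Drop rows containing any missing value (the default behavior)
df.dropna()

# Drop only rows where every column is missing
df.dropna(how='all')
```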
###Markdown
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ **Pandas I****Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows, columns, elements using `.loc` and `.iloc`* Creating filters using boolean operators* Changing data in rows, columns, and elementsThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 75 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___ When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rows Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on this data about the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We can put the population data into a Series.
###Code
# import pandas, `as pd` allows us to shorten typing `pandas` to `pd` each time we call pandas
import pandas as pd
# Create a data series in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data is a 64-bit integer. `.iloc[]` Integer Location SelectionTo the left of each Series is an index number. This index number is very similar to a Python list index; it can help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but it would be much easier to use if our Series had names like a Python dictionary. It is cumbersome to remember the index number for each country, so we can instead give each row an index with names.
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
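The same Series could also be built in a single step by passing an `index=` argument to `pd.Series()` — a minimal sketch using the values and names above:

```
# Create the Series and its named index at the same time
worldpop = pd.Series(
    [1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127],
    index=['China', 'India', 'United States', 'Indonesia', 'Pakistan',
           'Brazil', 'Nigeria', 'Bangladesh', 'Russia', 'Mexico'],
    name='World Population (In Millions)'
)
worldpop
```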
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, very similar to how we can supply a key to a dictionary to get a value. We use the `.loc` attribute.
###Code
# Return the series value for Nigeria
worldpop.loc['Nigeria']
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
# Return a slice from Nigeria to Russia
# This slice will include the final element!
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
A Series is like an ordered dictionary. In fact, we can create a Series out of a list (where the index will automatically be numerical starting at 0) or a dictionary (where the keys are the index).
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
worldcitiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)')
#Return the series
worldcitiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
Instead of evaluating to a Truth Table, we can also evaluate to a smaller series.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
# If we wanted to save this to a new series variable
#new_series = worldpop[worldpop > 200]
###Output
_____no_output_____
###Markdown
Pandas uses `|` to represent `or` operations. It uses `&` to represent `and` operations. We can also use `~` for negation.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
worldpop.loc[(worldpop > 500) | (worldpop < 250)]
###Output
_____no_output_____
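For completeness, a small sketch of `&` and `~` on the same Series (the thresholds are only illustrative):

```
# Countries with populations between 250 and 500 million (both conditions must be True)
worldpop.loc[(worldpop > 250) & (worldpop < 500)]

# Countries NOT above 500 million
worldpop.loc[~(worldpop > 500)]
```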
###Markdown
Modifying a SeriesWe can use an initialization statement to change a value in our Series.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
print(worldpop)
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Summary of Pandas Series* A Series is a single column of data that may contain a Name and Index* Use `.iloc` to select a row by index number* Use `.loc` to select a row by index name* Use an initialization statement to change values* Boolean operators include & (and), | (or), ~ (negation) Pandas DataFrameIf a Series is like a column of data, a DataFrame is like a table connecting multiple columns together. DataFrames can contain thousands or millions of rows and columns. When working with DataFrames, we are usually using a dataset that has been compiled by someone else. Often the data will be in the form of a CSV or Excel file.
###Code
import pandas as pd
# Create a DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
###Output
_____no_output_____
###Markdown
Exploring DataFrame ContentsNow that we have a DataFrame called `df`, we need to learn a little more about its contents. The first step is usually to explore the DataFrame's attributes. Attributes are properties of the dataset (not functions), so they do not have parentheses `()` after them. |Attribute|Reveals||---|---||.shape| The number of rows and columns||.info| The shape plus the first and last 5 rows||.columns| The name of each column||.index| The name of each row|
###Code
# Use `.shape` to find rows and columns in the DataFrame
df.shape
# Use `.info` to find the shape plus the first and last five rows of the DataFrame
df.info
# Use `.columns` to find the name of each column (if they are named)
df.columns
###Output
_____no_output_____
###Markdown
We can use `.index` attribute to discover the name for each row in our DataFrame. We set the index column to `Username`, but `Identifier` would also make sense. If no column is chosen, a numeric index is created starting at 0.
###Code
# Use `.index` to list the rows of our DataFrame
df.index
###Output
_____no_output_____
###Markdown
Preview with `.head()` and `.tail()`We can also use the `.head()` and `.tail()` methods to get a preview of our DataFrame.
###Code
# Use `.head()` to see the first five lines
# Pass an integer into .head() to see a different number of lines
df.head()
# Use `.tail()` to see the last five lines
# Pass an integer into .tail() to see a different number of lines
df.tail()
###Output
_____no_output_____
###Markdown
Display More Rows or ColumnsBy default, Pandas limits the number of rows and columns to display. If desired, we can increase or decrease the number to display. If your DataFrame has a limited number of rows or columns, you may wish to show all of them.
###Code
# Show all columns
# Set `None` to an integer to show a set number
pd.set_option('display.max_columns', None)
# Show all rows
# Set `None` to an integer to show a set number
# Be careful if your dataset is thousands of lines long!
pd.set_option('display.max_rows', None)
###Output
_____no_output_____
###Markdown
Change Column NamesIf we wanted to change the column names, one option is to modify the original data file. We can also change the column names in the DataFrame.
###Code
# Updating all column names at once
df.columns = ['email', 'Identifier', 'First name', 'Last name']
df
# Updating a single column name
df.rename(columns={'email': 'Login email'}, inplace=True)
df
###Output
_____no_output_____
###Markdown
Reset the IndexWhen we created the DataFrame, we used the `index_col` parameter to set the index column to the `Username` column.```df = pd.read_csv('data/sample2.csv', index_col='Username')```We could reset the index to a numerical index starting at 0 using the `.reset_index()` method.
###Code
# Reset the Index for the DataFrame to integers,
# moving `Username` back into a regular column
# Without `inplace=True`, this call only previews the change
df.reset_index()
###Output
_____no_output_____
###Markdown
For many operations that will alter a DataFrame, such as `.reset_index`, the changes will be previewed unless an `inplace=True` parameter is passed. This allows users to preview changes to the data before implementing them in a permanent fashion. Of course, you should always work on a copy of your data in case a manipulation goes awry.
###Code
# Confirm index has not been changed
df
# Make the change to reset the index
df.reset_index(inplace=True)
# Print the index, now changed
df
# Change the index back to `Username`
df.set_index('Username', inplace=True)
df
###Output
_____no_output_____
###Markdown
Sorting the IndexWe can sort the index by using `sort_index()`.
###Code
# Sort the DataFrame by ascending order
df.sort_index()
# Sort by descending order
df.sort_index(ascending=False)
###Output
_____no_output_____
###Markdown
`.loc[]` and `.iloc[]` SelectionLike Series, DataFrames can use the `.iloc[]` and `.loc[]` methods for selection. To select a particular element, we need to supply a row *and* a column.
###Code
# View our DataFrame for reference
df
# Return the value for the specified row and column
df.iloc[6, 3]
# Return the value for the specified row and column
df.loc['booker12', 'First name']
# Select an entire row
df.loc['redtree333', :]
###Output
_____no_output_____
###Markdown
Technically, we could also use: `df.loc['redtree333']` for the same result, but including the `, :` makes our row and column selections explicit, where the `:` is basically a slice that selects every column. Using a `:` is required if we want to select an entire column using `.loc[]` since the row selection comes before the column selection.
###Code
# Select an entire column
df.loc[:, 'Login email']
###Output
_____no_output_____
###Markdown
Of course, we can use the `:` to make a slice using `.loc[]` or `.iloc[]`.
###Code
# Slicing rows and columns using `.iloc`
df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
**Note that `.iloc[]` slicing is not inclusive of the final value, similar to a Python list**. On the other hand, `.loc[]` slicing *is* inclusive. The reason for this difference is that an exclusive `.loc[]` slice would make the code confusing, since we would have to reference whatever name comes *after* the last name we actually want to include.
###Code
# Slicing rows and columns using `.loc`
df.loc['booker12':'french999', 'Login email':'First name']
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe can also use Boolean expressions to select based on the contents of the elements. We can use these expressions to create filters for selecting particular rows or columns.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
df
# Return a Truth Table for the `Identifier` column
# Where the Identifier is more than 4000
df.loc[:, 'Identifier'] > 4000
# Preview every row where the Identifier is more than 4000
id_filter = (df.loc[:, 'Identifier'] > 4000)
df.loc[id_filter, :]
# Alternatively, the whole expression can be written out
# But this can be a little more difficult to read
# In this case, it is a good idea to include parentheses
# To make clear the row filter is one expression
#df.loc[(df.loc[:, 'Identifier'] > 4000), :]
# Preview every row with Last name "Smith"
name_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[name_filter, :]
# Select the row with `First Name` of Jamie
# And last name of `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith') & (df.loc[:, 'First name'] == 'Jamie')
df.loc[name_filter, :]
# Find every row with Last Name not `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith')
df.loc[~name_filter, :]
# Or alternatively
#name_filter = (df.loc[:, 'Last name'] != 'Smith')
#df.loc[name_filter, :]
###Output
_____no_output_____
###Markdown
Modifying a DataFrameA single element can be changed with an initialization statement.
###Code
# Change a value using `.loc[]`
df.loc['jenkins46', 'First name'] = 'Mark'
df
###Output
_____no_output_____
###Markdown
We can also use filters for more powerful manipulation.
###Code
# Create a string filter that checks for email addresses containing
# 'example.com'. For missing (na) elements, output `False` instead of NaN.
email_filt = df['Login email'].str.contains('example.com', na=False)
email_filt
# Re-Initialize `df` without the users with no email address
df = df[email_filt]
df
###Output
_____no_output_____
###Markdown
Dropping Rows Without DataThere is also a `.dropna()` method specifically for dropping rows with missing data.
###Code
# Recreate the DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
df # Confirm the NaN fields have returned
# Remove all rows without a `Login email` using `.dropna()`
df = df.dropna(subset=['Login email'])
df # Confirm the fields were dropped
###Output
_____no_output_____
###Markdown
Created by [Nathan Kelber](http://nkelber.com) and Ted Lawless for [JSTOR Labs](https://labs.jstor.org/) under [Creative Commons CC BY License](https://creativecommons.org/licenses/by/4.0/)For questions/comments/improvements, email [email protected].___ Pandas I**Description:** This notebook describes how to:* Create a Pandas Series or DataFrame* Accessing data rows, columns, elements using `.loc` and `.iloc`* Creating filters using boolean operators* Changing data in rows, columns, and elementsThis is the first notebook in a series on learning to use Pandas. **Use Case:** For Learners (Detailed explanation, not ideal for researchers)**Difficulty:** Intermediate**Knowledge Required:** * Python Basics ([Start Python Basics I](./python-basics-1.ipynb))**Knowledge Recommended:** * [Working with Dataset Files](./working-with-dataset-files.ipynb)**Completion Time:** 75 minutes**Data Format:** CSV (.csv)**Libraries Used:** Pandas**Research Pipeline:** None___ When to use Pandas Pandas is a Python data analysis and manipulation library. When it comes to viewing and manipulating data, most people are familiar with commercial spreadsheet software, such as Microsoft Excel or Google Sheets. While spreadsheet software and Pandas can accomplish similar tasks, each has significant advantages depending on the use-case.**Advantages of Spreadsheet Software*** Point and click* Easier to learn* Great for small datasets (<10,000 rows)* Better for browsing data**Advantages of Pandas*** More powerful data manipulation with Python* Can work with large datasets (millions of rows)* Faster for complicated manipulations* Better for cleaning and/or pre-processing data* Can automate workflows in a larger data pipelineIn short, spreadsheet software is better for browsing small datasets and making moderate adjustments. Pandas is better for automating data cleaning processes that require large or complex data manipulation.Pandas can interpret a wide variety of data sources, including Excel files, CSV files, and Python objects like lists and dictionaries. Pandas converts these into two fundamental objects: * Data Series- a single column of data* DataFrame- a table of data containing multiple columns and rows Pandas SeriesWe can think of a Series as a single column of data. A DataFrame then is made by combining Series objects side-by-side into a table that has both height and width. Let's create a Series based on the world's ten most-populated countries [according to Wikipedia](https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population).|Population (in millions)||---||1,404||1,366||330||269||220||211||206||169||146||127|We will put these population numbers into a Pandas Series.
###Code
# import pandas, `as pd` allows us to shorten typing `pandas` to `pd` when we call pandas
import pandas as pd
###Output
_____no_output_____
###Markdown
To create our Series, we pass a list into the Series method:`variable_name = pd.Series([1, 2, 3])`
###Code
# Create a data series in Pandas
worldpop = pd.Series([1404, 1366, 330, 269, 220, 211, 206, 169, 146, 127])
# Give our series a name
worldpop.name = 'World Population (In Millions)'
print(worldpop)
###Output
_____no_output_____
###Markdown
Underneath the Series is a `dtype` which describes the way the data is stored in the Series. Here we see `int64`, denoting the data is a 64-bit integer. `.iloc[]` Integer Location SelectionTo the left of each Series is an index number. This index number is very similar to a Python list index; it can help us reference a particular row for data retrieval. Also, like a Python list, the index to a Series begins with 0. We can retrieve individual elements in a Series using the `.iloc` attribute, which stands for "integer location."
###Code
# Return the 4th element in our series
worldpop.iloc[3]
# Return a slice of elements in our series
# This slice will not include element 4
worldpop.iloc[2:4]
###Output
_____no_output_____
###Markdown
By default, our Series has a numerical index like a Python list, but we can also give each row an identifier (like a key within a Python dictionary). We do this by using:`series_name.index = [name_1, name_2, name_3]`
###Code
# Rename the index to use names instead of numerical indexes
worldpop.index = [
'China',
'India',
'United States',
'Indonesia',
'Pakistan',
'Brazil',
'Nigeria',
'Bangladesh',
'Russia',
'Mexico'
]
worldpop
###Output
_____no_output_____
###Markdown
`.loc[]` Location SelectionNow we can also reference each element by its index name, very similar to how we can supply a key to a dictionary to get a value. We use the `.loc` attribute.
###Code
# Return the series value for Nigeria
worldpop.loc['Nigeria']
###Output
_____no_output_____
###Markdown
Instead of a value, we can return a new series by supplying a list. This will return the value *with the index names* as well.
###Code
# Return a new series containing only Nigeria
# Note that we use two sets of brackets
worldpop.loc[['Nigeria']]
# Return a series value for Indonesia and Mexico
worldpop.loc[['Indonesia', 'Mexico']]
# Return a slice from Nigeria to Russia
# This slice will include the final element!
# This behavior is different than a list slice
worldpop.loc['Nigeria':'Russia']
###Output
_____no_output_____
###Markdown
A Series is like an ordered dictionary. In fact, we can create a Series out of a list (where the index will automatically be numerical starting at 0) or a dictionary (where the keys are the index).
###Code
# Creating a Series from a dictionary
# Based on most populous cities in the world according to Wikipedia
worldcitiespop = pd.Series({
'Tokyo': 37,
'Delhi': 28,
'Shanghai': 25,
'São Paulo': 21,
'Mexico City': 21,
'Cairo': 20,
'Mumbai': 19,
'Beijing': 19,
'Dhaka': 19,
'Osaka': 19,
}, name='World City Populations (In Millions)')
#Return the series
worldcitiespop
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe have seen already how we can select a particular value in a series by using an index name or number. We can also select particular values using Boolean expressions. An expression will evaluate to a Truth Table.
###Code
# Which countries have populations greater than 200 million?
worldpop > 200
###Output
_____no_output_____
###Markdown
Instead of evaluating to a Truth Table, we can also evaluate to a smaller series by putting the expression into `.loc[]`.
###Code
# Evaluate worldpop for `worldpop > 200`
worldpop.loc[worldpop > 200]
###Output
_____no_output_____
###Markdown
Note that we have not changed the values of `worldpop` but only evaluated the expression. `worldpop` remains the same.
###Code
worldpop
###Output
_____no_output_____
###Markdown
If we wanted to store the evaluation, we would need to use an assignment statement, either for `worldpop` or a new variable.
###Code
# If we wanted to save this to a new series variable
new_series = worldpop[worldpop > 200]
new_series
###Output
_____no_output_____
###Markdown
Pandas uses `|` to represent `or` operations. It uses `&` to represent `and` operations. We can also use `~` for negation.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
worldpop.loc[(worldpop > 500) | (worldpop < 250)]
###Output
_____no_output_____
###Markdown
Modifying a SeriesWe can use an initialization statement to change a value in our Series. The syntax is very similar to changing an item value in a list.
###Code
# Change the population of China to 1500
worldpop.loc['China'] = 1500
worldpop
# Change the population of several countries based on an expression
worldpop.loc[worldpop < 300] = 25
worldpop
###Output
_____no_output_____
###Markdown
Summary of Pandas Series* A Series is a single column of data that may contain a Name and Index* Use `.iloc` to select a row by index number* Use `.loc` to select a row by index name* Use an initialization statement to change values* Boolean operators include & (and), | (or), ~ (negation) Pandas DataFrameIf a Series is like a column of data, a DataFrame is like a table connecting multiple columns together. DataFrames can contain thousands or millions of rows and columns. When working with DataFrames, we are usually using a dataset that has been compiled by someone else. Often the data will be in the form of a CSV or Excel file. We can import a .csv file with `.read_csv()` method, passing in the csv location. We can also supply an index column name with `index_col`.
###Code
import pandas as pd
# Create a DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
###Output
_____no_output_____
###Markdown
Exploring DataFrame ContentsNow that we have a DataFrame called `df`, we need to learn a little more about its contents. The first step is usually to explore the DataFrame's attributes. Attributes are properties of the dataset (not functions), so they do not have parentheses `()` after them. |Attribute|Reveals||---|---||.shape| The number of rows and columns||.info| The shape plus the first and last 5 rows||.columns| The name of each column||.index| The name of each row|
###Code
# Use `.shape` to find rows and columns in the DataFrame
df.shape
# Use `.info` to find the shape plus the first and last five rows of the DataFrame
df.info
# Use `.columns` to find the name of each column (if they are named)
df.columns
###Output
_____no_output_____
###Markdown
We can use `.index` attribute to discover the name for each row in our DataFrame. We set the index column to `Username`, but `Identifier` would also make sense. If no column is chosen, a numeric index is created starting at 0.
###Code
# Use `.index` to list the rows of our DataFrame
df.index
###Output
_____no_output_____
###Markdown
Preview with `.head()` and `.tail()`We can also use the `.head()` and `.tail()` methods to get a preview of our DataFrame.
###Code
# Use `.head()` to see the first five lines
# Pass an integer into .head() to see a different number of lines
df.head()
# Use `.tail()` to see the last five lines
# Pass an integer into .tail() to see a different number of lines
df.tail()
###Output
_____no_output_____
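For example (a quick sketch using the same `df`), passing an integer changes how many rows are shown:

```
# First three rows only
df.head(3)

# Last two rows only
df.tail(2)
```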
###Markdown
Display More Rows or ColumnsBy default, Pandas limits the number of rows and columns to display. If desired, we can increase or decrease the number to display. If your DataFrame has a limited number of rows or columns, you may wish to show all of them.
###Code
# Show all columns
# Set `None` to an integer to show a set number
pd.set_option('display.max_columns', None)
# Show all rows
# Set `None` to an integer to show a set number
# Be careful if your dataset is thousands of lines long!
pd.set_option('display.max_rows', None)
###Output
_____no_output_____
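If you later want to return to pandas' default display limits, `pd.reset_option()` undoes these settings — a minimal sketch using the same option names as above:

```
# Restore the default display limits
pd.reset_option('display.max_columns')
pd.reset_option('display.max_rows')
```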
###Markdown
Change Column NamesIf we wanted to change the column names, one option is to modify the original data file. We can also change the column names in the DataFrame.
###Code
# Updating all column names at once
df.columns = ['email', 'Identifier', 'First name', 'Last name']
df
# Updating a single column name
df.rename(columns={'email': 'Login email'}, inplace=True)
df
###Output
_____no_output_____
###Markdown
By default, `inplace=False`, which means that Pandas will output what the change *would* look like but not make changes to the DataFrame. It is a preview of the changes. This feature is intentional to make sure the user does not accidentally make a permanent change. **There is no undo! Always keep a backup of your file and do not write changes over the original file unless you are sure they are correct.**Passing `inplace=True` tells Pandas to make the change immediately without any preview. Reset the IndexWhen we created the DataFrame, we used the `index_col` parameter to set the index column to the `Username` column.```df = pd.read_csv('data/sample2.csv', index_col='Username')```We could reset the index to a numerical index starting at 0 using the `.reset_index()` method.
###Code
# Reset the Index for the DataFrame to integers,
# moving `Username` back into a regular column
# Without `inplace=True`, this call only previews the change
df.reset_index()
###Output
_____no_output_____
###Markdown
For many operations that will alter a DataFrame, such as `.reset_index`, the changes will be previewed unless an `inplace=True` parameter is passed. This allows users to preview changes to the data before implementing them in a permanent fashion. Of course, you should always work on a copy of your data in case a manipulation goes awry.
###Code
# Confirm index has not been changed
df
# Make the change to reset the index
df.reset_index(inplace=True)
# Print the index, now changed
df
# Change the index back to `Username`
df.set_index('Username', inplace=True)
df
###Output
_____no_output_____
###Markdown
Sorting the IndexWe can sort the index by using `sort_index()`.
###Code
# Sort the DataFrame by ascending order
df.sort_index()
# Sort by descending order
df.sort_index(ascending=False)
###Output
_____no_output_____
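As with the other methods in this notebook, `sort_index()` only returns a sorted copy and leaves `df` unchanged. A short sketch of keeping the result (assigning back, or alternatively passing `inplace=True`):

```
# Keep the sorted result by assigning it back to `df`
df = df.sort_index()
df
```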
###Markdown
`.loc[]` and `.iloc[]` SelectionLike Series, DataFrames can use the `.iloc[]` and `.loc[]` methods for selection. To select a particular element, we need to supply a row *and* a column.
###Code
# View our DataFrame for reference
df
# Return the value for the specified row and column
df.iloc[6, 3]
# Return the value for the specified row and column
df.loc['booker12', 'First name']
# Select an entire row
df.loc['redtree333', :]
###Output
_____no_output_____
###Markdown
Technically, we could also use: `df.loc['redtree333']` for the same result, but including the `, :` makes our row and column selections explicit, where the `:` is basically a slice that selects every column. Using a `:` is required if we want to select an entire column using `.loc[]` since the row selection comes before the column selection.
###Code
# Select an entire column
df.loc[:, 'Login email']
###Output
_____no_output_____
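A related sketch: passing a list of column names to `.loc[]` selects several entire columns at once (column names as used in this notebook):

```
# Select two entire columns by name
df.loc[:, ['First name', 'Last name']]
```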
###Markdown
Of course, we can use the `:` to make a slice using `.loc[]` or `.iloc[]`.
###Code
# Slicing rows and columns using `.iloc`
df.iloc[0:3, 1:4]
###Output
_____no_output_____
###Markdown
**Note that `.iloc[]` slicing is not inclusive of the final value, similar to a Python list**. On the other hand, `.loc[]` slicing *is* inclusive. The reason for this difference is that an exclusive `.loc[]` slice would make the code confusing, since we would have to reference whatever name comes *after* the last name we actually want to include.
###Code
# Slicing rows and columns using `.loc`
df.loc['booker12':'french999', 'Login email':'First name']
###Output
_____no_output_____
###Markdown
Boolean ExpressionsWe can also use Boolean expressions to select based on the contents of the elements. We can use these expressions to create filters for selecting particular rows or columns.|Pandas Operator|Boolean|Requires||---|---|---||&|and|All must be `True`||\||or|If any are `True`||~|not|The opposite|
###Code
df
# Return a Truth Table for the `Identifier` column
# Where the Identifier is more than 4000
df.loc[:, 'Identifier'] > 4000
# Preview every row where the Identifier is more than 4000
id_filter = (df.loc[:, 'Identifier'] > 4000)
df.loc[id_filter, :]
# Alternatively, the whole expression can be written out
# But this can be a little more difficult to read
# In this case, it is a good idea to include parentheses
# To make clear the row filter is one expression
#df.loc[(df.loc[:, 'Identifier'] > 4000), :]
# Preview every row with Last name "Smith"
name_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[name_filter, :]
# Select the row with `First Name` of Jamie
# And last name of `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith') & (df.loc[:, 'First name'] == 'Jamie')
df.loc[name_filter, :]
# Find every row with Last Name not `Smith`
name_filter = (df.loc[:, 'Last name'] == 'Smith')
df.loc[~name_filter, :]
# Or alternatively
#name_filter = (df.loc[:, 'Last name'] != 'Smith')
#df.loc[name_filter, :]
###Output
_____no_output_____
###Markdown
Modifying a DataFrameA single element can be changed with an initialization statement.
###Code
# Change a value using `.loc[]`
df.loc['jenkins46', 'First name'] = 'Mark'
df
###Output
_____no_output_____
###Markdown
We can also use filters for more powerful manipulation.
###Code
# Create a string filter that checks for email addresses containing
# 'example.com'. For missing (na) elements, output `False` instead of NaN.
email_filt = df['Login email'].str.contains('example.com', na=False)
email_filt
# Re-Initialize `df` without the users with no email address
df = df[email_filt]
df
###Output
_____no_output_____
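Filters can also be combined with `.loc[]` assignment to update many rows at once — a minimal sketch assuming the current `df`; the replacement value 'unknown' is purely hypothetical:

```
# Hypothetical example: overwrite the first name for every row whose Last name is 'Smith'
smith_filter = df.loc[:, 'Last name'] == 'Smith'
df.loc[smith_filter, 'First name'] = 'unknown'
df
```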
###Markdown
Dropping Rows Without DataThere is also a `.dropna()` method specifically for dropping rows with missing data.
###Code
# Recreate the DataFrame `df` from the CSV file 'sample2.csv'
df = pd.read_csv('data/sample2.csv', index_col='Username')
df # Confirm the NaN fields have returned
# Remove all rows without a `Login email` using `.dropna()`
df = df.dropna(subset=['Login email'])
df # Confirm the fields were dropped
###Output
_____no_output_____ |
notebooks/adversarial_retraining.ipynb | ###Markdown
IntroductionThis notebook shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and trained as described in the following paper:M.Sinn, M.Wistuba, B.Buesser, M.-I.Nicolae, M.N.Tran: **Evolutionary Search for Adversarially Robust Neural Network** *ICLR SafeML Workshop 2019 (arXiv link to the paper will be added shortly)*.The models were saved in `.h5` using Python 3.6, TensorFlow 1.11.0, Keras 2.2.4.
###Code
import warnings
warnings.filterwarnings('ignore')
from keras.datasets import mnist, cifar10
from keras.models import load_model
from keras.utils.np_utils import to_categorical
import numpy as np
from art import config
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import ProjectedGradientDescent
from art.utils import get_file
###Output
Using TensorFlow backend.
###Markdown
MNISTThree different MNIST models are available. Use the following URLs to access them:- `mnist_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1)- `mnist_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1)- `mnist_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('mnist_ratio=0.5.h5',extract=False, path=config.ART_DATA_PATH,
url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 10000
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10000 benign test samples: 0.995100
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 10
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10 adversarial test samples: 0.900000
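Because all three MNIST model URLs are listed above, a small comparison sketch can be added here. It is not part of the original paper's evaluation; it simply reuses `get_file`, `load_model`, `KerasClassifier`, and the MNIST test data already defined in this notebook, and the subset size `n` is arbitrary:

```
# Compare benign accuracy of the three MNIST models listed above (sketch only)
mnist_model_urls = {
    'ratio=0':   'https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1',
    'ratio=0.5': 'https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1',
    'ratio=1':   'https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1',
}
n = 1000
for name, url in mnist_model_urls.items():
    model_path = get_file('mnist_%s.h5' % name, extract=False, path=config.ART_DATA_PATH, url=url)
    clf = KerasClassifier(model=load_model(model_path), use_logits=False, clip_values=[0, 1])
    preds = clf.predict(X_test[:n])
    acc = np.mean(np.argmax(preds, axis=1) == np.argmax(y_test[:n], axis=1))
    print("mnist_%s.h5: benign accuracy on first %i test samples: %f" % (name, n, acc))
```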
###Markdown
CIFAR-10Similarly to MNIST, three different CIFAR-10 models are available at the following URLs:- `cifar-10_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/hbvua7ynhvara12/cifar-10_ratio%3D0.h5?dl=1)- `cifar-10_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1)- `cifar-10_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/7btc2sq7syf68at/cifar-10_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32')
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=config.ART_DATA_PATH,
url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 100
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 benign test samples: 0.940000
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 100
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 adversarial test samples: 0.470000
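The benign/adversarial evaluation pattern is repeated for both datasets, so it could be wrapped in a small helper. A minimal sketch, assuming the `classifier`, `attack`, and test arrays defined above; the helper name `evaluate` is our own:

```
def evaluate(classifier, attack, X, y, n=100):
    """Return (benign_accuracy, adversarial_accuracy) on the first n samples."""
    y_true = np.argmax(y[:n], axis=1)
    benign_acc = np.mean(np.argmax(classifier.predict(X[:n]), axis=1) == y_true)
    X_adv = attack.generate(X[:n], y=y[:n])
    adv_acc = np.mean(np.argmax(classifier.predict(X_adv), axis=1) == y_true)
    return benign_acc, adv_acc

# Example usage with the CIFAR-10 classifier and attack defined above
print(evaluate(classifier, attack, X_test, y_test, n=100))
```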
###Markdown
IntroductionThis notebook shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and trained as described in the following paper:M.Sinn, M.Wistuba, B.Buesser, M.-I.Nicolae, M.N.Tran: **Evolutionary Search for Adversarially Robust Neural Network** *ICLR SafeML Workshop 2019 (arXiv link to the paper will be added shortly)*.The models were saved in `.h5` using Python 3.6, TensorFlow 1.11.0, Keras 2.2.4.
###Code
import warnings
warnings.filterwarnings('ignore')
from keras.datasets import mnist, cifar10
from keras.models import load_model
from keras.utils.np_utils import to_categorical
import numpy as np
from art import DATA_PATH
from art.classifiers import KerasClassifier
from art.attacks import ProjectedGradientDescent
from art.utils import get_file
###Output
Using TensorFlow backend.
###Markdown
MNISTThree different MNIST models are available. Use the following URLs to access them:- `mnist_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1)- `mnist_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1)- `mnist_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('mnist_ratio=0.5.h5',extract=False, path=DATA_PATH,
url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 10000
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10000 benign test samples: 0.995100
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 10
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10 adversarial test samples: 1.000000
###Markdown
CIFAR-10Similarly to MNIST, three different CIFAR-10 models are available at the following URLs:- `cifar-10_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/hbvua7ynhvara12/cifar-10_ratio%3D0.h5?dl=1)- `cifar-10_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1)- `cifar-10_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/7btc2sq7syf68at/cifar-10_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32')
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=DATA_PATH,
url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 100
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 benign test samples: 0.940000
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 100
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 adversarial test samples: 0.450000
###Markdown
IntroductionThis notebook shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and trained as described in the following paper:M.Sinn, M.Wistuba, B.Buesser, M.-I.Nicolae, M.N.Tran: **Evolutionary Search for Adversarially Robust Neural Network** *ICLR SafeML Workshop 2019 (arXiv link to the paper will be added shortly)*.The models were saved in `.h5` using Python 3.6, TensorFlow 1.11.0, Keras 2.2.4.
###Code
import warnings
warnings.filterwarnings('ignore')
from keras.datasets import mnist, cifar10
from keras.models import load_model
from keras.utils.np_utils import to_categorical
import numpy as np
from art.config import ART_DATA_PATH
from art.classifiers import KerasClassifier
from art.attacks import ProjectedGradientDescent
from art.utils import get_file
###Output
Using TensorFlow backend.
###Markdown
MNISTThree different MNIST models are available. Use the following URLs to access them:- `mnist_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1)- `mnist_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1)- `mnist_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('mnist_ratio=0.5.h5',extract=False, path=ART_DATA_PATH,
url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 10000
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10000 benign test samples: 0.995100
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 10
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10 adversarial test samples: 1.000000
###Markdown
CIFAR-10Similarly to MNIST, three different CIFAR-10 models are available at the following URLs:- `cifar-10_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/hbvua7ynhvara12/cifar-10_ratio%3D0.h5?dl=1)- `cifar-10_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1)- `cifar-10_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/7btc2sq7syf68at/cifar-10_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32')
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=ART_DATA_PATH,
url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 100
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 benign test samples: 0.940000
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 100
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 adversarial test samples: 0.450000
###Markdown
IntroductionThis notebook shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and trained as described in the following paper:M.Sinn, M.Wistuba, B.Buesser, M.-I.Nicolae, M.N.Tran: **Evolutionary Search for Adversarially Robust Neural Network** *ICLR SafeML Workshop 2019 (arXiv link to the paper will be added shortly)*.The models were saved in `.h5` using Python 3.6, TensorFlow 1.11.0, Keras 2.2.4.
###Code
import warnings
warnings.filterwarnings('ignore')
from keras.datasets import mnist, cifar10
from keras.models import load_model
from keras.utils.np_utils import to_categorical
import numpy as np
from art.config import ART_DATA_PATH
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import ProjectedGradientDescent
from art.utils import get_file
###Output
Using TensorFlow backend.
###Markdown
MNISTThree different MNIST models are available. Use the following URLs to access them:- `mnist_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1)- `mnist_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1)- `mnist_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('mnist_ratio=0.5.h5',extract=False, path=ART_DATA_PATH,
url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 10000
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10000 benign test samples: 0.995100
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 10
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10 adversarial test samples: 0.900000
###Markdown
CIFAR-10Similarly to MNIST, three different CIFAR-10 models are available at the following URLs:- `cifar-10_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/hbvua7ynhvara12/cifar-10_ratio%3D0.h5?dl=1)- `cifar-10_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1)- `cifar-10_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/7btc2sq7syf68at/cifar-10_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32')
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=ART_DATA_PATH,
url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 100
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 benign test samples: 0.940000
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 100
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 adversarial test samples: 0.470000
###Markdown
IntroductionThis notebook shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and trained as described in the following paper:M.Sinn, M.Wistuba, B.Buesser, M.-I.Nicolae, M.N.Tran: **Evolutionary Search for Adversarially Robust Neural Network** *ICLR SafeML Workshop 2019 (arXiv link to the paper will be added shortly)*.The models were saved in `.h5` using Python 3.6, TensorFlow 1.11.0, Keras 2.2.4.
###Code
import warnings
warnings.filterwarnings('ignore')
from keras.datasets import mnist, cifar10
from keras.models import load_model
from keras.utils.np_utils import to_categorical
import numpy as np
from art import config
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import ProjectedGradientDescent
from art.utils import get_file
###Output
Using TensorFlow backend.
###Markdown
MNISTThree different MNIST models are available. Use the following URLs to access them:- `mnist_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/bv1xwjaf1ov4u7y/mnist_ratio%3D0.h5?dl=1)- `mnist_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1)- `mnist_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/oa2kowq7kgaxh1o/mnist_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32') / 255
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('mnist_ratio=0.5.h5',extract=False, path=config.ART_DATA_PATH,
url='https://www.dropbox.com/s/0skvoxjd6klvti3/mnist_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,1])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 10000
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10000 benign test samples: 0.995100
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=0.3, eps_step=0.01, max_iter=40, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 10
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 10 adversarial test samples: 0.900000
###Markdown
CIFAR-10Similarly to MNIST, three different CIFAR-10 models are available at the following URLs:- `cifar-10_ratio=0.h5`: trained on 100% benign samples (https://www.dropbox.com/s/hbvua7ynhvara12/cifar-10_ratio%3D0.h5?dl=1)- `cifar-10_ratio=0.5.h5`: trained on 50% benign and 50% adversarial samples (https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1)- `cifar-10_ratio=1.h5`: trained on 100% adversarial samples (https://www.dropbox.com/s/7btc2sq7syf68at/cifar-10_ratio%3D1.h5?dl=1) Load data:
###Code
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
X_train = X_train.reshape(X_train.shape[0], 32, 32, 3).astype('float32')
X_test = X_test.reshape(X_test.shape[0], 32, 32, 3).astype('float32')
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
E.g. load the model trained on 50% benign and 50% adversarial samples:
###Code
path = get_file('cifar-10_ratio=0.5.h5',extract=False, path=config.ART_DATA_PATH,
url='https://www.dropbox.com/s/96yv0r2gqzockmw/cifar-10_ratio%3D0.5.h5?dl=1')
model = load_model(path)
classifier = KerasClassifier(model=model, use_logits=False, clip_values=[0,255])
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` benign test samples:
###Code
n = 100
y_pred = classifier.predict(X_test[:n])
accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i benign test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 benign test samples: 0.940000
###Markdown
Define adversarial attack:
###Code
attack = ProjectedGradientDescent(classifier, eps=8, eps_step=2, max_iter=10, targeted=False,
num_random_init=True)
###Output
_____no_output_____
###Markdown
Assess accuracy on first `n` adversarial test samples:
###Code
n = 100
X_test_adv = attack.generate(X_test[:n], y=y_test[:n])
y_adv_pred = classifier.predict(X_test_adv)
accuracy = np.mean(np.argmax(y_adv_pred, axis=1) == np.argmax(y_test[:n], axis=1))
print("Accuracy on first %i adversarial test samples: %f" % (n, accuracy))
###Output
Accuracy on first 100 adversarial test samples: 0.470000
|
notebooks/Chapter11/MedianSmoothing_with_ResNet50.ipynb | ###Markdown
Median Smoothing[Xu et al. in “Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks”](https://arxiv.org/abs/1704.01155). 1. Preliminary setup and checks Import libraries
###Code
from imagenet_util import *
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage.filters import median_filter # function used for the median filter
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.applications.resnet50 import preprocess_input
###Output
_____no_output_____
###Markdown
Create the ResNet50 model
###Code
model = ResNet50(weights='imagenet')
###Output
Downloading data from https://github.com/keras-team/keras-applications/releases/download/resnet/resnet50_weights_tf_dim_ordering_tf_kernels.h5
102973440/102967424 [==============================] - 6s 0us/step
###Markdown
Load the original image
###Code
original_image_path = '../../images/chihuahua2.jpg'
original_image = load_img(original_image_path, target_size=(224, 224))
original_image = img_to_array(original_image)
###Output
_____no_output_____
###Markdown
Inference and classification of the original image
###Code
# Inference
Y_hat = model.predict(np.expand_dims(preprocess_input(original_image.copy()), 0))
# Get the top predicted class
original_class, original_name, original_score = get_top_pred(Y_hat)
# Display the classification result and score
print('Prediction: {0} - score {1:.2f}%'.format(original_name, original_score * 100))
###Output
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/imagenet_class_index.json
40960/35363 [==================================] - 0s 0us/step
Prediction: Chihuahua - score 67.11%
###Markdown
Run the test images through it. Load the adversarial example (npy)
###Code
# Load the adversarial example generated with JSMA
adv_image = np.load('../../data/chihuahua_jsma.npy')
###Output
_____no_output_____
###Markdown
Inference and classification of the adversarial example
###Code
# Inference
Y_hat_adv = model.predict(np.expand_dims(preprocess_input(adv_image.copy()), 0))
# Get the top predicted class
adv_class, adv_name, adv_score = get_top_pred(Y_hat_adv)
# Display the classification result and score
print('Prediction: {0} - score {1:.2f}%'.format(adv_name, adv_score * 100))
###Output
Prediction: weasel - score 23.78%
###Markdown
2. Implementing and applying Median Smoothing The median_smoothing function
###Code
def median_smoothing(image, filter_size=3):
return median_filter(image, size=(filter_size, filter_size, 1) , mode="reflect")
###Output
_____no_output_____
###Markdown
Apply Median Smoothing
###Code
original_ms_image = median_smoothing(original_image)
adv_ms_image = median_smoothing(adv_image)
###Output
_____no_output_____
###Markdown
Inference and classification of the original image after applying Median Smoothing
###Code
# Inference
Y_hat_ms = model.predict(np.expand_dims(preprocess_input(original_ms_image.copy()), 0))
# Get the top predicted class
original_ms_class, original_ms_name, original_ms_score = get_top_pred(Y_hat_ms)
# Display the classification result and score
print('Prediction : {0} - score {1:.2f}%'.format(original_ms_name, original_ms_score * 100))
###Output
Prediction : Chihuahua - score 57.17%
###Markdown
Inference and classification of the adversarial example after applying Median Smoothing
###Code
# Inference
Y_hat_adv_ms = model.predict(np.expand_dims(preprocess_input(adv_ms_image.copy()), 0))
# Get the top predicted class
adv_ms_class, adv_ms_name, adv_ms_score = get_top_pred(Y_hat_adv_ms)
# Display the classification result and score
print('Prediction: {0} - score {1:.2f}%'.format(adv_ms_name, adv_ms_score * 100))
###Output
Prediction: Chihuahua - score 54.43%
###Markdown
Display the original image before and after Median Smoothing
###Code
plt.figure(figsize=(14, 14))
# Show the original image
plt.subplot(1, 2, 1)
plt.axis('off')
plt.title('Original\n {0} - {1:.2f}%'.format(original_name, original_score * 100))
plt.imshow(original_image/255.0)
# Show the original image after Median Smoothing
plt.subplot(1, 2, 2)
plt.axis('off')
plt.title('Original after MS\n {0} - {1:.2f}%'.format(original_ms_name, original_ms_score * 100))
plt.imshow(original_ms_image/255.0)
###Output
_____no_output_____
###Markdown
Display the adversarial example before and after Median Smoothing
###Code
plt.figure(figsize=(14, 14))
# Show the adversarial example
plt.subplot(1, 2, 1)
plt.axis('off')
plt.title('Adversarial\n {0} - {1:.2f}%'.format(adv_name, adv_score * 100))
plt.imshow(adv_image/255.0)
# Show the adversarial example after Median Smoothing
plt.subplot(1, 2, 2)
plt.axis('off')
plt.title('Adversarial after MS\n {0} - {1:.2f}%'.format(adv_ms_name, adv_ms_score * 100))
plt.imshow(adv_ms_image/255.0)
###Output
_____no_output_____
###Markdown
3. Illustrating attack detection The attack-detection function
###Code
def anomaly_detection(non_preprocess_class, preprocessed_class):
if non_preprocess_class == preprocessed_class:
print('Normal.')
else:
print('Anomaly.')
###Output
_____no_output_____
###Markdown
Normal case
###Code
anomaly_detection(original_class, original_ms_class)
###Output
Normal.
###Markdown
Anomaly case
###Code
anomaly_detection(adv_class, adv_ms_class)
###Output
Anomaly.
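As a follow-up to the class-agreement check above, Xu et al. instead threshold the distance between the two prediction vectors. A minimal sketch of that variant is below; the helper name, the threshold choice, and the assumption that `model` receives consistently preprocessed inputs are illustrative and not part of this notebook.
```python
import numpy as np
from scipy.ndimage import median_filter

def feature_squeezing_score(model, image, filter_size=3):
    # L1 distance between predictions on the raw and the median-smoothed input;
    # apply the same preprocessing to both inputs before calling this helper
    squeezed = median_filter(image, size=(filter_size, filter_size, 1), mode="reflect")
    p_raw = model.predict(image[np.newaxis, ...])[0]
    p_squeezed = model.predict(squeezed[np.newaxis, ...])[0]
    return float(np.abs(p_raw - p_squeezed).sum())

# flag the input as adversarial if the score exceeds a threshold tuned on clean validation data
```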
|
conv2D_experiment.ipynb | ###Markdown
MNIST Digits
###Code
# Gram matrix of the flattened images (fully-connected view), averaged over the flattened input dimension
Gxx_fcn = x_train.reshape(N_tr,-1) @ x_train.reshape(N_tr,-1).T / x_train.reshape(N_tr,-1).shape[1]
# Gram tensor contracted over the channel axis, with axes arranged as (sample, sample, row, row, col, col)
Gxx = np.moveaxis(np.tensordot(x_train, x_train, (3, 3)), (3,2), (1,4)) ## Tensordot in channel axis
# Gram matrix of the one-hot labels, averaged over the number of classes
Gyy = y_train @ y_train.T / y_train.shape[1]
idx = 12
plt.imshow(x_train[idx].squeeze())
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.title('$Digit: 2$', fontsize=22)
plt.savefig('single_mnist_image.png', dpi=600)
plt.show()
plt.imshow(Gxx_fcn)
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.title('$G_{xx}$', fontsize=22)
plt.savefig('gxx_fcn.png', dpi=600)
plt.show()
plt.imshow(Gxx[idx,idx].reshape(resized**2,resized**2))
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.title('$G_{xx}$ (Tensor)', fontsize=22)
plt.savefig('gxx_cnn.png', dpi=600)
plt.show()
plt.imshow(Gyy)
plt.gca().get_xaxis().set_visible(False)
plt.gca().get_yaxis().set_visible(False)
plt.title('$G_{yy}$', fontsize=22)
plt.savefig('gyy_fcn.png', dpi=600)
plt.show()
###Output
_____no_output_____ |
docs/desc-0000-qp-photo-z_approximation/research/data_exploration.ipynb | ###Markdown
Exploring BPZ Test Data_Alex Malz (NYU) & Phil Marshall (SLAC)_In this notebook we develop machinery to evaluate our approximations on whole datasets in "survey mode."
###Code
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import hickle
import numpy as np
from pathos.multiprocessing import ProcessingPool as Pool
import random
import cProfile
import pstats
import StringIO
import timeit
import psutil
import sys
import os
import timeit
import pandas as pd
pd.set_option('display.max_columns', None)
import matplotlib.pyplot as plt
%matplotlib inline
import qp
from qp.utils import calculate_kl_divergence as make_kld
np.random.seed(seed=42)
random.seed(a=42)
###Output
_____no_output_____
###Markdown
Set-up, Ingest There are two datasets available:* $10^{5}$ LSST-like mock data provided by Sam Schmidt (UC Davis, LSST)* $10^{4}$ Euclid-like mock data provided by Melissa Graham (UW, LSST)
###Code
# choose one of these:
# dataset_key = 'Euclid'# Melissa Graham's data
dataset_key = 'LSST'# Sam Schmidt's data
dataname = dataset_key
dataset_info = {}
dataset_info[dataset_key] = {}
###Output
_____no_output_____
###Markdown
Both datasets are fit with BPZ.
###Code
if dataset_key == 'Euclid':
datafilename = 'bpz_euclid_test_10_2.probs'
elif dataset_key == 'LSST':
datafilename = 'test_magscat_trainingfile_probs.out'
dataset_info[dataset_key]['filename'] = datafilename
###Output
_____no_output_____
###Markdown
The data files don't appear to come with information about the native format or metaparameters, but we are told they're evaluations on a regular grid of redshifts with given endpoints and number of parameters.
###Code
if dataset_key == 'Euclid':
z_low = 0.01
z_high = 3.51
elif dataset_key == 'LSST':
z_low = 0.005
z_high = 2.11
dataset_info[dataset_key]['z_lim'] = (z_low, z_high)
z_grid = np.arange(z_low, z_high, 0.01, dtype='float')
z_range = z_high - z_low
delta_z = z_range / len(z_grid)
dataset_info[dataset_key]['z_grid'] = z_grid
dataset_info[dataset_key]['delta_z'] = delta_z
###Output
_____no_output_____
###Markdown
Let's read in the catalog data. Note that it has a sizeable footprint even for a "small" number of galaxies.
###Code
## Warning: reading in the data is slow for Sam Schmidt's dataset!
with open(dataset_info[dataset_key]['filename'], 'rb') as data_file:
lines = (line.split(None) for line in data_file)
lines.next()
pdfs = np.array([[float(line[k]) for k in range(1,len(line))] for line in lines])
# dataset_info[dataset_key]['native_pdfs'] = pdfs
print('storage footprint '+str(sys.getsizeof(pdfs))+' bytes')
###Output
_____no_output_____
###Markdown
Visualizing the BPZ $p(z)$'sLet's plot a few interesting PDFs from the dataset.
###Code
# colors = ['red','green','blue','cyan','magenta','yellow']
# n_plot = len(colors)
# # if dataset_key == 'mg':
# # indices = [1, 3, 14, 16, 19, 21]
# # elif dataset_key == 'ss':
# n_gals_tot = len(pdfs)
# full_gal_range = range(n_gals_tot)
# indices = np.random.choice(full_gal_range, n_plot)
# for i in range(n_plot):
# plt.plot(dataset_info[dataset_key]['z_grid'], pdfs[indices[i]],
# color=colors[i], label=dataset_key+' #'+str(indices[i]))
# plt.xlabel(r'$z$', fontsize=16)
# plt.ylabel(r'$p(z)$', fontsize=16)
# plt.title(dataset_key+' mock catalog')
# plt.legend()
# plt.savefig('pz_placeholder_'+dataset_key+'.pdf', dpi=250)
###Output
_____no_output_____
###Markdown
Note: BPZ PDFs are not properly normalized. In order to be true PDFs, we want $\int_{-\infty}^{\infty} p(z) dz = 1$, but the data file entries satisfy $\sum_{z=z_{min}}^{z_{max}} p(z) = 1$, which is not in general the same. `qp` approximates the desired integral as $1 = \int p(z) dz \approx \Delta_{z} \sum_{z=z_{min}}^{z_{max}} p(z)$, where $\Delta_{z} = \frac{z_{max} - z_{min}}{N_{ff}}$ and the native format PDF is evaluated at $N_{ff}$ redshifts. Approximating the BPZ $p(z)$'s Let's pick out a galaxy with an interesting $p(z)$ to turn into a `qp.PDF` object initialized with a gridded parametrization.
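For reference, a minimal sketch of the renormalization described above (the helper name and arguments are illustrative, not part of `qp`):
```python
import numpy as np

def renormalize_gridded_pdf(z_grid, p_raw):
    # rescale the gridded values so the Riemann-sum approximation of the integral equals 1
    delta_z = (max(z_grid) - min(z_grid)) / len(z_grid)
    return np.asarray(p_raw) / (np.sum(p_raw) * delta_z)
```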
###Code
if dataset_key == 'Euclid':
chosen = 5390
elif dataset_key == 'LSST':
# chosen = 108019
indices = [ 12543, 52661, 46216, 53296, 95524, 84574 , 2607 ,56017 , 64794, 7600]
chosen = indices[9]
start_time = timeit.default_timer()
G = qp.PDF(gridded=(dataset_info[dataset_key]['z_grid'], pdfs[chosen]))
print(timeit.default_timer() - start_time)
G.plot()
###Output
_____no_output_____
###Markdown
`qp` cannot currently convert gridded PDFs to histograms or quantiles - we need to make a GMM first, and use this to instantiate a `qp.PDF` object using a `qp.composite` object based on that GMM as `qp.PDF.truth`. The number of parameters necessary for a qualitatively good fit depends on the characteristics of the dataset.
###Code
if dataset_key == 'Euclid':
nc_needed = 3
elif dataset_key == 'LSST':
nc_needed = 5
dataset_info[dataset_key]['N_GMM'] = nc_needed
###Output
_____no_output_____
###Markdown
We can fit a GMM directly to the gridded PDF (via an internal interpolation). The direct fit, however, is not guaranteed to converge, particularly if the underlying distribution is not actually well-described by a weighted sum of Gaussians -- this is why storing the GMM parameters instead of a non-parametric format can be dangerous.
###Code
start_time = timeit.default_timer()
G.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'],
using='gridded', vb=True)
time = timeit.default_timer() - start_time
print(str(time)+' for GMM fit to gridded')
G.plot()
###Output
_____no_output_____
###Markdown
The alternative is to take a large number of samples and fit a GMM to those (via the same internal interpolation). We can check that the fits are very similar. Though it is slower, we will sample before fitting to guarantee convergence.
###Code
high_res = 1000
start_time = timeit.default_timer()
G.sample(high_res, vb=False)
G.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'],
using='samples', vb=True)
time = timeit.default_timer() - start_time
print(str(time)+' for GMM fit to samples')
G.plot()
###Output
_____no_output_____
###Markdown
The `qp.composite` object can be used as the `qp.PDF.truth` to initialize a new `qp.PDF` object that doesn't have any information about the gridded or sample approximations but has a qualitatively similar shape and is thus "realistically complex" enough to draw conclusions about real data. Now we can approximate it any way we like! Consider this example for $N_f=7$ parameters.
###Code
N_f = 7
M = qp.PDF(truth=G.mix_mod, limits=dataset_info[dataset_key]['z_lim'])
M.quantize(N=N_f, vb=False)
M.histogramize(N=N_f, binrange=dataset_info[dataset_key]['z_lim'], vb=False)
M.sample(N=N_f, using='truth', vb=False)
M.plot(loc=dataset_key+'_example_pz.pdf', vb=True)
###Output
_____no_output_____
###Markdown
Quantifying the Accuracy of the ApproximationWe can also calculate the KLD metric on this `qp.PDF`. The KLD quantifies the information loss of an approximation of a PDF relative to the true PDF in units of nats. Thus, a lower KLD corresponds to more information being preserved in the approximation.
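For intuition, a simplified grid-based stand-in for this calculation (not the actual `qp.utils.calculate_kl_divergence` implementation) is:
```python
import numpy as np

def kld_on_grid(p_true, q_approx, delta_z, eps=1e-12):
    # KLD in nats: the integral of p * log(p / q), approximated on a regular grid
    p = np.clip(np.asarray(p_true), eps, None)
    q = np.clip(np.asarray(q_approx), eps, None)
    return float(np.sum(p * np.log(p / q)) * delta_z)
```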
###Code
formats = ['quantiles', 'histogram', 'samples']
parametrizations = {}
for f in formats:
parametrizations[f] = {}
for ff in formats:
parametrizations[f][ff] = None
parametrizations['quantiles']['quantiles'] = M.quantiles
parametrizations['histogram']['histogram'] = M.histogram
parametrizations['samples']['samples'] = M.samples
dataset_info[dataset_key]['inits'] = parametrizations
klds = {}
P = qp.PDF(truth=M.truth)
for f in formats:
Q = qp.PDF(quantiles=dataset_info[dataset_key]['inits'][f]['quantiles'],
histogram=dataset_info[dataset_key]['inits'][f]['histogram'],
samples=dataset_info[dataset_key]['inits'][f]['samples'])
klds[f] = make_kld(P, Q)
print(klds)
###Output
_____no_output_____
###Markdown
Survey ModeWe want to compare parametrizations for large catalogs, so we'll need to be more efficient. The `qp.Ensemble` object is a wrapper for `qp.PDF` objects enabling conversions to be performed and metrics to be calculated in parallel. We'll experiment on a subsample of 100 galaxies.
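Under the hood, the parallel evaluation relies on the standard pool-map pattern; a minimal sketch using the `pathos` pool imported above (the helper and its arguments are illustrative, not the `qp.Ensemble` internals):
```python
from pathos.multiprocessing import ProcessingPool as Pool

def parallel_map(func, items, n_procs=3):
    # evaluate func on each item in a separate worker process, preserving order
    return Pool(n_procs).map(func, items)
```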
###Code
n_gals_tot = len(pdfs)
n_gals_use = 100
full_gal_range = range(n_gals_tot)
subset = np.random.choice(full_gal_range, n_gals_use)
pdfs_use = pdfs[subset]
# using the same grid for output as the native format, but doesn't need to be so
dataset_info[dataset_key]['in_z_grid'] = dataset_info[dataset_key]['z_grid']
dataset_info[dataset_key]['metric_z_grid'] = dataset_info[dataset_key]['z_grid']
n_floats_use = 10
if dataset_key == 'Euclid':
dataset_info[dataset_key]['N_GMM'] = 3
elif dataset_key == 'LSST':
dataset_info[dataset_key]['N_GMM'] = 5
fit_components = dataset_info[dataset_key]['N_GMM']
n_moments_use = 3
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
###Output
_____no_output_____
###Markdown
We'll start by reading in our catalog of gridded PDFs, sampling them, fitting GMMs to the samples, and establishing a new `qp.Ensemble` object where each member `qp.PDF` object has `qp.PDF.truth`$\neq$`None`.
###Code
def setup_from_grid(in_pdfs, z_grid, N_comps, high_res=1000):
#read in the data, happens to be gridded
zlim = (min(z_grid), max(z_grid))
N_pdfs = len(in_pdfs)
# plot_examples(N_pdfs, z_grid, pdfs)
print('making the initial ensemble of '+str(N_pdfs)+' PDFs')
E0 = qp.Ensemble(N_pdfs, gridded=(z_grid, in_pdfs), vb=True)
print('made the initial ensemble of '+str(N_pdfs)+' PDFs')
#fit GMMs to gridded pdfs based on samples (faster than fitting to gridded)
print('sampling for the GMM fit')
samparr = E0.sample(high_res, vb=False)
print('took '+str(high_res)+' samples')
print('making a new ensemble from samples')
Ei = qp.Ensemble(N_pdfs, samples=samparr, vb=False)
print('made a new ensemble from samples')
print('fitting the GMM to samples')
GMMs = Ei.mix_mod_fit(comps=N_comps, vb=False)
print('fit the GMM to samples')
#set the GMMS as the truth
print('making the final ensemble')
Ef = qp.Ensemble(N_pdfs, truth=GMMs, vb=False)
print('made the final ensemble')
return(Ef)
# return
def plot_examples(z_grid, pdfs, n_plot=6):
N_pdfs =len(pdfs)
randos = np.random.choice(range(N_pdfs), n_plot)
for i in range(n_plot):
plt.plot(z_grid, pdfs[randos[i]], label=dataset_key+r'\#'+str(randos[i]))
plt.xlabel(r'$z$', fontsize=16)
plt.ylabel(r'$p(z)$', fontsize=16)
plt.title(dataset_key+' mock catalog')
plt.legend()
plt.savefig('pz_placeholder_'+dataset_key+'.png', dpi=250)
# pr = cProfile.Profile()
# pr.enable()
catalog = setup_from_grid(pdfs_use, dataset_info[dataset_key]['in_z_grid'],
fit_components)
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(s.getvalue())
plot_examples(dataset_info[dataset_key]['in_z_grid'], pdfs_use, n_plot=6)
###Output
_____no_output_____
###Markdown
Next, we compute the KLD between each approximation and the truth for every member of the ensemble. We make the `qp.Ensemble.kld` into a `qp.PDF` object of its own to compare the moments of the KLD distributions for different parametrizations.
###Code
def analyze_individual(E, z_grid, N_floats, N_moments=4):
zlim = (min(z_grid), max(z_grid))
z_range = zlim[-1] - zlim[0]
delta_z = z_range / len(z_grid)
Eq, Eh, Es = E, E, E
inits = {}
for f in formats:
inits[f] = {}
for ff in formats:
inits[f][ff] = None
print('performing quantization')
inits['quantiles']['quantiles'] = Eq.quantize(N=N_floats, vb=False)
print('performing histogramization')
inits['histogram']['histogram'] = Eh.histogramize(N=N_floats, binrange=zlim, vb=False)
print('performing sampling')
inits['samples']['samples'] = Es.sample(samps=N_floats, vb=False)
print('making the approximate ensembles')
Eo ={}
for f in formats:
Eo[f] = qp.Ensemble(E.n_pdfs, truth=E.truth,
quantiles=inits[f]['quantiles'],
histogram=inits[f]['histogram'],
samples=inits[f]['samples'])
print('made the approximate ensembles')
print('calculating the individual metrics')
klds = {}
metrics = {}
moments = {}
for key in Eo.keys():
print('starting '+key)
klds[key] = Eo[key].kld(using=key, limits=zlim, dx=delta_z)
samp_metric = qp.PDF(samples=klds[key])
gmm_metric = samp_metric.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'],
using='samples')
metrics[key] = qp.PDF(truth=gmm_metric)
moments[key] = []
for n in range(N_moments+1):
moments[key].append([qp.utils.calculate_moment(metrics[key], n,
using=key,
limits=zlim,
dx=delta_z,
vb=False)])
print('finished with '+key)
print('calculated the individual metrics')
# plot_individual(klds, N_floats)
return(Eo, klds, moments)
def plot_individual(pz_klds, N_floats):
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
plot_bins = np.linspace(-3., 3., 20)
for key in pz_klds.keys():
plt.hist(np.log(pz_klds[key]), color=colors[key], alpha=0.5,
label=key, normed=True, bins=plot_bins)
plt.legend()
plt.ylabel('frequency')
plt.xlabel(r'$\log[KLD]$')
plt.title(dataset_key+r' dataset with $N_{f}='+str(N_floats)+r'$')
plt.savefig(dataset_key+'_metric_histogram_placeholder.png', dpi=250)
# pr = cProfile.Profile()
# pr.enable()
(ensembles, pz_klds, metric_moments) = analyze_individual(catalog,
dataset_info[dataset_key]['metric_z_grid'],
n_floats_use,
n_moments_use)
dataset_info[dataset_key]['pz_klds'] = pz_klds
dataset_info[dataset_key]['pz_kld_moments'] = metric_moments
plot_individual(pz_klds, n_floats_use)
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(s.getvalue())
###Output
_____no_output_____
###Markdown
Finally, we calculate metrics on the stacked estimator $\hat{n}(z)$ that is the average of all members of the ensemble.
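Explicitly, for an ensemble of $N_{\mathrm{gal}}$ galaxies with approximated PDFs $\hat{p}_{i}(z)$, the stacked estimator is just their average, $\hat{n}(z) = \frac{1}{N_{\mathrm{gal}}} \sum_{i=1}^{N_{\mathrm{gal}}} \hat{p}_{i}(z)$, evaluated on the metric redshift grid for the truth and for each approximation.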
###Code
def analyze_stacked(E0, E, z_grid):
zlim = (min(z_grid), max(z_grid))
z_range = zlim[-1] - zlim[0]
delta_z = z_range / len(z_grid)
parametrizations = E.keys()
print('stacking the ensembles')
stacked_pdfs = {}
for key in formats:
stacked_pdfs[key] = qp.PDF(gridded=E[key].stack(z_grid, using=key,
vb=False)[key])
stacked_pdfs['truth'] = qp.PDF(gridded=E0.stack(z_grid, using='truth',
vb=False)['truth'])
print('stacked the ensembles')
print('calculating the metrics')
klds = {}
for key in parametrizations:
klds[key] = qp.utils.calculate_kl_divergence(stacked_pdfs['truth'],
stacked_pdfs[key],
limits=zlim, dx=delta_z)
print('calculated the metrics')
# plot_estimators(z_grid, stacked_pdfs, klds)
return(stacked_pdfs, klds)
def plot_estimators(z_grid, stacked_pdfs, klds):
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
plt.title(r'$\hat{n}(z)$ for '+str(n_floats_use)+' numbers')
plt.plot(z_grid, stacked_pdfs['truth'].evaluate(z_grid, vb=False)[1], color='black', lw=4, alpha=0.3, label='truth')
for key in formats:
plt.plot(z_grid, stacked_pdfs[key].evaluate(z_grid, vb=False)[1], label=key+' KLD='+str(klds[key]), color=colors[key])
plt.xlabel(r'$z$')
plt.ylabel(r'$\hat{n}(z)$')
plt.legend()
plt.title(r'$\hat{n}(z)$ for '+str(n_floats_use)+' numbers')
plt.savefig(dataset_key+'_nz_comparison.png', dpi=250)
# pr = cProfile.Profile()
# pr.enable()
(stack_evals, nz_klds) = analyze_stacked(catalog, ensembles, dataset_info[dataset_key]['metric_z_grid'])
dataset_info[dataset_key]['nz_ests'] = stack_evals
dataset_info[dataset_key]['nz_klds'] = nz_klds
plot_estimators(dataset_info[dataset_key]['metric_z_grid'], stack_evals, nz_klds)
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(s.getvalue())
###Output
_____no_output_____
###Markdown
We save the data so we can remake the plots later without running everything again. ScalingWe'd like to do this for many values of $N_{f}$ as well as larger catalog subsamples, repeating the analysis many times to establish error bars on the KLD as a function of format, $N_{f}$, and dataset. The things we want to plot across multiple datasets/numbers of parameters are:1. KLD of stacked estimator, i.e. `N_f` vs. `nz_output[dataset][format][instantiation][KLD_val_for_N_f]`2. moments of KLD of individual PDFs, i.e. `n_moment, N_f` vs. `pz_output[dataset][format][n_moment][instantiation][moment_val_for_N_f]`So, we need to make sure these are saved!
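As a concrete (entirely hypothetical) picture of that nesting for the $\hat{n}(z)$ case:
```python
# illustrative values only -- shows the intended indexing, not real results
nz_output = {
    'LSST': {                   # dataset
        'quantiles': [          # format
            [1.8, 2.1],         # instantiation 0: one KLD value per N_f tried
            [1.7, 2.0],         # instantiation 1
        ],
    },
}
```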
###Code
if os.path.exists('nz_metrics.hkl'):
with open('nz_metrics.hkl', 'r') as nz_file:
#read in content of list/dict
nz_stats = hickle.load(nz_file)
else:
nz_stats = {}
nz_stats['N_f'] = []
if N_f not in nz_stats['N_f']:
nz_stats['N_f'].append(N_f)
where_N_f = nz_stats['N_f'].index(N_f)
if dataset_key not in nz_stats.keys():
nz_stats[dataset_key] = {}
for f in parametrizations:#change this name to formats
nz_stats[dataset_key][f] = [[]]
for f in parametrizations:
nz_stats[dataset_key][f][where_N_f].append(dataset_info[dataset_key]['nz_klds'][f])
with open('nz_metrics.hkl', 'w') as nz_file:
hickle.dump(nz_stats, nz_file)
###Output
_____no_output_____
###Markdown
We want to plot the KLD on $\hat{n}(z)$ for all formats as $N_{f}$ changes, repeating this for many subsamples of the catalog to establish error bars on the KLD values.
###Code
with open('nz_metrics.hkl', 'r') as nz_file:
nz_stats = hickle.load(nz_file)
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
# need to get some version of this working from nz_klds
plt.figure(figsize=(5, 5))
for f in parametrizations.keys():
data_arr = np.swapaxes(np.array(nz_stats[dataset_key][f]), 0, 1)#turn N_f * instantiations into instantiations * N_f
n_i = len(data_arr)
a = 1./n_i
plt.plot([2 * max(nz_stats['N_f']), 2 * max(nz_stats['N_f'])], [1., 10.], color=colors[f], alpha=a, label=f)
for i in data_arr:
# will be regular plot not scatter with more N_f options
plt.plot(nz_stats['N_f'], i[0], color=colors[f], alpha=a)
plt.semilogy()
plt.semilogx()
plt.xlim(min(nz_stats['N_f'])-1, max(nz_stats['N_f'])+1)
plt.ylim(1., 10.)
plt.xlabel(r'number of parameters')
plt.ylabel(r'KLD')
plt.legend()
plt.title(r'$\hat{n}(z)$ KLD on '+str(n_gals_use)+' from '+dataset_key)
plt.savefig(dataset_key+'_nz_metrics_placeholder.png', dpi=250)
# won't really know how this looks without more N_f tested
###Output
_____no_output_____
###Markdown
We want to plot the moments of the KLD distribution for each format as $N_{f}$ changes.
###Code
if os.path.exists('pz_metrics.hkl'):
with open('pz_metrics.hkl', 'r') as pz_file:
#read in content of list/dict
pz_stats = hickle.load(pz_file)
else:
pz_stats = {}
pz_stats['N_f'] = []
if N_f not in pz_stats['N_f']:
pz_stats['N_f'].append(N_f)
where_N_f = pz_stats['N_f'].index(N_f)
if dataset_key not in pz_stats.keys():
pz_stats[dataset_key] = {}
for f in parametrizations:#change this name to formats
pz_stats[dataset_key][f] = []
for m in range(n_moments_use + 1):
pz_stats[dataset_key][f].append([[]])
if N_f not in pz_stats['N_f']:
pz_stats[dataset_key][f][m].append([])
for f in parametrizations:
for m in range(n_moments_use + 1):
pz_stats[dataset_key][f][m][where_N_f].append(dataset_info[dataset_key]['pz_kld_moments'][f][m])
with open('pz_metrics.hkl', 'w') as pz_file:
hickle.dump(pz_stats, pz_file)
with open('pz_metrics.hkl', 'r') as pz_file:
pz_stats = hickle.load(pz_file)
def make_patch_spines_invisible(ax):
ax.set_frame_on(True)
ax.patch.set_visible(False)
for sp in ax.spines.values():
sp.set_visible(False)
shapes = ['o','+','x','v','^','<','>']
fig, ax = plt.subplots()
fig.subplots_adjust(right=1.)
ax_n = ax
for key in parametrizations.keys():
ax_n.plot([-1], [0], color=colors[key], label=key)
for n in range(1, 4):
ax.scatter([-1], [0], color='k', marker=shapes[n-1], label='moment '+str(n))
if n>1:
ax_n = ax.twinx()
if n>2:
ax_n.spines["right"].set_position(("axes", 1. + 0.1 * (n-1)))
make_patch_spines_invisible(ax_n)
ax_n.spines["right"].set_visible(True)
for f in parametrizations.keys():
data_arr = np.swapaxes(np.array(pz_stats[dataset_key][f][n]), 0, 1)
n_i = len(data_arr)
a = 1./n_i
for i in data_arr:
ax_n.scatter(pz_stats['N_f'], i, marker=shapes[n-1], color=colors[f], alpha=a)
ax_n.set_ylabel('moment '+str(n))
ax.set_xlim(1,1000)#should be N_f range and logged
ax.semilogx()
ax.set_xlabel('number of parameters')
ax.legend()
fig.suptitle('KLD moments on '+str(n_gals_use)+' from '+dataset_key)
fig.savefig(dataset_key+'_pz_metrics_placeholder.png', dpi=250)
###Output
_____no_output_____
###Markdown
Okay, now all I have to do is have this loop over both datasets, number of galaxies, and number of floats! Everything after here is scratch. That's all, folks!
###Code
## everything works above here! now it's time to make plots from this output!
# # Function to test the experimental qp.Ensemble object!
# def analyze():#(pdfs, N_comps, z, N_floats):
# #read in the data, happens to be gridded
# z_low, z_high = min(z), max(z)
# N_pdfs = len(pdfs)
# out_E = {}
# E0 = qp.Ensemble(N_pdfs, gridded=(z, pdfs), vb=False)
# #fit gridded pdfs as GMMs based on samples
# samparr = E0.sample(1000, vb=False)
# print(np.shape(samparr))
# Ei = qp.Ensemble(N_pdfs, samples=samparr, vb=False)
# GMMs = Ei.mix_mod_fit(comps=N_comps, using='samples', vb=False)
# # out_E['GMMs'] = []
# # for GMM in GMMs:
# # out_E['GMMs'].append(GMM.functions[0].stats())
# #set the GMMS as the truth
# Ef = qp.Ensemble(N_pdfs, truth=GMMs, vb=False)
# #stack them and save the output
# out_E['truth'] = Ef.stack(z, using='mix_mod', vb=False)
# # #evaluate as gridded and save the output
# # Et = qp.Ensemble(N_pdfs, gridded=Ef.evaluate(z))
# # out_E['gridded'] = Et.stack(z, using='gridded')
# #evaluate as quantiles and save the output
# Eq = qp.Ensemble(N_pdfs, quantiles=Ef.quantize(N=N_floats), vb=False)
# #q_stack = Eq.stack(z, using='quantiles')
# out_E['quantiles'] = Eq.stack(z, using='quantiles', vb=False)
# # #evaluate as histogram and save the output
# # Eh = qp.Ensemble(N_pdfs, histogram=Ef.histogramize(N=N_floats, binrange=(z_low, z_high)))
# # #h_stack = Eh.stack(z, using='histogram')
# # out_E['histogram'] = Eh.stack(z, using='histogram')
# # #evaluate as samples and save the output
# # Es = qp.Ensemble(N_pdfs, samples=Ef.sample(samps=N_floats))
# # #s_stack = Es.stack(z, using='samples')
# # out_E['samples'] = Es.stack(z, using='samples')
# return(out_E)#, KLDs, RMSEs)
###Output
_____no_output_____
###Markdown
Let's run a test with 100 galaxies and 10 parameters. This should take about 5 minutes or so.
###Code
# print(n_gals_use, n_floats_use, s.getvalue())
###Output
_____no_output_____
###Markdown
Let's show the stacked versions and compute metrics.
###Code
# print(results.keys())
# print(results['truth']['mix_mod'])
# KLDs, RMSEs = {}, {}
# P = qp.PDF(gridded=results['truth']['mix_mod'])
# metric_keys = results.keys()
# metric_keys.remove('truth')
# for est in metric_keys:
# Q = qp.PDF(gridded=results[est][est])
# KLDs[est] = qp.utils.calculate_kl_divergence(P, Q, vb=False)
# RMSEs[est] = qp.utils.calculate_rmse(P, Q, vb=False)
# plt.plot(results[est][est][0], results[est][est][1], label=est)
# plt.legend()
# print(KLDs, RMSEs)
###Output
_____no_output_____
###Markdown
Things are quite broken after this point!
###Code
# P = qp.PDF(gridded=stack_ests['truth'])
# KLDs, RMSEs = {}, {}
# for est in .keys():
# Q = qp.PDF(gridded=stack_ests[est])
# KLDs[est] = qp.utils.calculate_kl_divergence(P, Q, vb=False)
# RMSEs[est] = qp.utils.calculate_rmse(P, Q, vb=False)
###Output
_____no_output_____
###Markdown
Let's plot the log standard deviations of the first component of the mixture models.
###Code
# moments = np.array(results['stats']).T
# fit_stats = moments[1]
# plt.hist(np.log(fit_stats))
###Output
_____no_output_____
###Markdown
Let's check the distribution of standard deviations of the ensemble.
###Code
# D = qp.PDF(samples = np.log(fit_stats))
# T = D.mix_mod_fit(n_components=1)
# D.plot()
# print(np.exp(T.functions[0].stats()))
###Output
_____no_output_____
###Markdown
Now enough of the `qp.Ensemble` functionality has been implemented to merge into the `master` branch!
###Code
# this ends the test of the experimental qp.Ensemble object
# you may now return to your regularly scheduled programming
# def analyze_one(index, N_comps, z, N_floats, logfilename='logfile.txt', vb=False):
# """
# Model the input BPZ P(z) as a GMM, approximate that GMM in
# various ways, and assess the quality of each approximation.
# Parameters
# ----------
# index : int
# ID of galaxy
# N_comps : int
# Number of components used in GMM
# N_floats : int
# Number of floats used to parametrize the P(z)
# z : float, ndarr
# Redshift array for input gridded "truth". Used for
# evaluating n(z) too
# logfilename: string
# where to put logging information
# vb : boolean
# Verbose output?
# Returns
# -------
# result : dict
# Dictionary containing metric values, n(z) on standard
# grid, samples, "true" GMM gridded p(z).
# Notes
# -----
# In some cases the GMM does not fit well, leading to bad KLD and
# RMSE values when it is compared to the truth.
# """
# # # Make z array if we don't already have it:
# # if z is None:
# # z = np.arange(0.01, 3.51, 0.01, dtype='float')
# dz = (max(z) - min(z)) / len(z)
# zlimits = [min(z), max(z)]
# # Make a dictionary to contain the results:
# result = {}
# # Make a GMM model of the input BPZ p(z) (which are stored
# # in the global 'pdfs' variable:
# G = qp.PDF(gridded=(z, pdfs[index]), vb=vb)
# # Draw 1000 samples, fit a GMM model to them, and make a true PDF:
# G.sample(1000, vb=vb)
# GMM = G.mix_mod_fit(n_components=N_comps, vb=vb)
# P = qp.PDF(truth=GMM, vb=vb)
# # Evaluate the GMM on the z grid, and store in the result dictionary. We'll
# # need this to make our "true" n(z) estimator. We don't need to keep the
# # z array, as we passed that in.
# result['truth'] = P.evaluate(z, using='truth', vb=vb)[1]
# # Now approximate P in various ways, and assess:
# Q, KLD, RMSE, approximation = {}, {}, {}, {}
# Q['quantiles'] = qp.PDF(quantiles=P.quantize(N=N_floats, vb=vb), vb=vb)
# Q['histogram'] = qp.PDF(histogram=P.histogramize(N=N_floats, binrange=zlimits, vb=vb), vb=vb)
# Q['samples'] = qp.PDF(samples=P.sample(N=N_floats, vb=vb), vb=vb)
# for k in Q.keys():
# KLD[k] = qp.calculate_kl_divergence(P, Q[k], limits=zlimits, dx=dz, vb=vb)
# RMSE[k] = qp.calculate_rmse(P, Q[k], limits=zlimits, dx=dz, vb=vb)
# approximation[k] = Q[k].evaluate(z, using=k, vb=vb)[1]
# # Store approximations:
# result['KLD'] = KLD
# result['RMSE'] = RMSE
# result['approximation'] = approximation
# result['samples'] = Q['samples'].samples
# with open(logfilename, 'a') as logfile:
# logfile.write(str((index, timeit.default_timer() - start_time))+'\n')
# return result
###Output
_____no_output_____
###Markdown
OK, now let's collate the metrics for the first 100 galaxies over a variable number of parameters, and look at the distribution of metric values. We're using multiprocessing because the `for` loop is slow; the rate-limiting step is the optimization routine for finding quantiles of a GMM.
###Code
# def one_analysis(N):
# all_results[str(N)] = []
# pr = cProfile.Profile()
# pr.enable()
# # with qp.Ensemble
# n_gals_tot = len(pdfs)
# full_gal_range = range(n_gals_tot)
# subset = np.random.choice(full_gal_range, n_gals)
# pdfs_use = pdfs[subset]
# all_results[str(N)] = analyze(pdfs_use, nc_needed, z, N)
# # # if multiprocessing:
# # logfilename = dataname + str(n_gals) + 'multi' + str(N)+'.txt'
# # def help_analyze(i):
# # return analyze_one(i, nc_needed, z, N, logfilename=logfilename)
# # pool = Pool(psutil.cpu_count() - 1)
# # results = pool.map(help_analyze, range(n_gals))
# # all_results[str(N)] = results
# # # tl;dr Tmax=270s for N_floats=3, 100 galaxies, 3 processors
# # # if looping:
# # logfilename = dataname + str(n_gals) + 'loop' + str(N)+'.txt'
# # for i in range(100):
# # all_results[str(N)].append(analyze_one(i, 2, z, N, logfilename=logfilename))
# # if i%10 == 0: print('.', end='')
# # # tl;dr Tmax=352s for N_floats=3, 100 galaxies
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(N, s.getvalue())
# return
# #%%time
# float_numbers = [3]#, 10, 30, 100]
# n_float_numbers = len(float_numbers)
# # gal_numbers = [100]#, 1000, 10000]
# # n_gal_numbers = len(gal_numbers)
# # total_results ={}
# # for M in gal_numbers:
# # n_gals = M
# n_gals = 100
# all_results = {}
# for N in float_numbers:
# start_time = timeit.default_timer()
# one_analysis(N)
# # total_results[str(n_gals)] = all_results
###Output
_____no_output_____
###Markdown
Since the previous step is quite slow (on the order of 5 minutes per test of different numbers of parameters for my laptop), this is a good point to save the results. We can load them from the file later and not remake them if we only want to do the rest of the analysis.
###Code
# with open('all_results.hkl', 'w') as result_file:
# hickle.dump(all_results, result_file)
# with open('all_results.hkl', 'r') as result_file:
# all_results = hickle.load(result_file)
# all_results = total_results[str(gal_numbers[0])]
# all_KLD, all_RMSE = [], []
# for n in range(n_float_numbers):
# KLD, RMSE = {}, {}
# for approximation in all_results[str(float_numbers[n])][0]['KLD'].keys():
# x = np.array([])
# for k in range(len(all_results[str(float_numbers[n])])):
# x = np.append(x, all_results[str(float_numbers[n])][k]['KLD'][approximation])
# KLD[approximation] = x
# x = np.array([])
# for k in range(len(all_results[str(float_numbers[n])])):
# x = np.append(x, all_results[str(float_numbers[n])][k]['RMSE'][approximation])
# RMSE[approximation] = x
# all_KLD.append(KLD)
# all_RMSE.append(RMSE)
###Output
_____no_output_____
###Markdown
Now let's plot histograms of the metric values.
###Code
# colors = {'samples':'green', 'quantiles':'blue', 'histogram':'red'}
# plt.figure(figsize=(12, 5 * n_float_numbers))
# i=0
# for n in range(n_float_numbers):
# i += 1
# # Lefthand panel: KLD
# plt.subplot(n_float_numbers, 2, i)
# plt.title('KLD for '+str(float_numbers[n])+' stored numbers')
# bins = np.linspace(0.0, 5., 25)
# for k in ['samples', 'quantiles', 'histogram']:
# plt.hist(all_KLD[n][k], bins, label=k, fc=colors[k], ec=colors[k], alpha=0.3, normed=True)
# #plt.semilogx()
# plt.xlabel('KL Divergence Metric', fontsize=16)
# plt.ylim(0., 5.0)
# plt.xlim(0., 5.0)
# plt.legend()
# i += 1
# # Righthand panel: RMSE
# plt.subplot(n_float_numbers, 2, i)#+n_numbers)
# plt.title('RMSE for '+str(float_numbers[n])+' stored numbers')
# bins = np.linspace(0.0, 5., 25)
# for k in ['samples', 'quantiles', 'histogram']:
# plt.hist(all_RMSE[n][k], bins, label=k, fc=colors[k], ec=colors[k], alpha=0.3, normed=True)
# #plt.semilogx()
# plt.xlabel('RMS Error Metric', fontsize=16)
# plt.ylim(0., 5.0)
# plt.xlim(0., 5.0)
# plt.legend();
# plt.savefig('money.png')
###Output
_____no_output_____
###Markdown
Interestingly, the metrics don't agree, nor is the behavior consistent across different numbers of parameters. However, as the number of parameters increases, the distributions of the metrics converge toward lower values.KLD seems to flag more "bad" approximations than RMSE. How do we know where to set the threshold in each metric? We should think of the right way to get a summary statistic (first moment?) on the ensemble of KLD or RMSE values so we can make the plot of number of parameters vs. quality of approximation. Now let's compute the estimated $n(z)$. We'll do this with the GMM "truth", and then using each of our approximations. And we'll normalize the $n(z)$ to account for lost systems with bad approximations.
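On the summary-statistic question above, one simple option (a sketch of the idea, not what the notebook ultimately adopts) is to use the low-order sample moments of each metric's distribution over the catalog:
```python
import numpy as np

def summarize_metric(values):
    values = np.asarray(values)
    # first two sample moments of the per-galaxy metric distribution
    return {'mean': float(values.mean()), 'variance': float(values.var())}
```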
###Code
# plt.figure(figsize=(6, 5 * n_float_numbers))
# all_n = []
# all_x = []
# all_y = []
# for i in range(n_float_numbers):
# results = all_results[str(float_numbers[i])]
# n = {}
# # Pull out all truths and compute the average at each z:
# x = np.zeros([len(z), len(results)])
# y = {}
# for approx in ['samples', 'quantiles', 'histogram']:
# y[approx] = np.zeros([len(z), len(results)])
# for k in range(len(results)):
# y[approx][:,k] = results[k]['approximation'][approx]
# for k in range(len(results)):
# x[:,k] = results[k]['truth']
# # Now do the averaging to make the estimators:
# n['truth'] = np.mean(x, axis=1)
# n['truth'] /= np.sum(n['truth']) * delta_z
# for approx in ['samples', 'quantiles', 'histogram']:
# n[approx] = np.mean(y[approx], axis=1)
# n[approx] /= np.sum(n[approx]) * delta_z
# all_n.append(n)
# all_x.append(x)
# all_y.append(y)
# # Note: this uses the samples' KDE to make the approximation. We could (and
# # should!) also try simply concatenating the samples and histogramming them.
# # Plot truth and all the approximations.
# # The NaNs in the histogram approximation make that unplottable for now.
# plt.subplot(n_float_numbers, 1, i+1)#+n_numbers)
# plt.title(r'$n(z)$ for '+str(float_numbers[i])+' numbers')
# plt.plot(z, n['truth'], color='black', lw=4, alpha=0.3, label='truth')
# for k in ['samples', 'quantiles', 'histogram']:
# plt.plot(z, n[k], label=k, color=colors[k])
# plt.xlabel('redshift z')
# plt.ylabel('n(z)')
# plt.legend();
# plt.savefig('nz_comparison.png', dpi=300)
###Output
_____no_output_____
###Markdown
The "samples" approximation gives the best result for the $n(z)$ estimator even with a small number of samples. However, once the number of parameters increases slightly, the "quantiles" approximation performs similarly. It takes a large number of parameters before the "histogram" approximation approaches the other options. Let's use the `qp.PDF` object to compare them quantitatively (since $n(z)$ can be normalized to give the global $p(z)$).
###Code
# all_p = []
# for i in range(n_float_numbers):
# n = all_n[i]
# p = {}
# for k in ['samples', 'quantiles', 'histogram']:
# p[k] = qp.PDF(gridded=(z,n[k]), vb=False)
# p['truth'] = qp.PDF(gridded=(z,n['truth']), vb=False)
# all_p.append(p)
# all_KLD_nz, all_RMSE_nz = {}, {}
# zlimits, dz = [z_low, z_high], 0.01
# for k in ['samples', 'quantiles', 'histogram']:
# p = all_p[i]
# KLD_nz, RMSE_nz = [], []
# for i in range(n_float_numbers):
# KLD_nz.append(qp.calculate_kl_divergence(all_p[i]['truth'], all_p[i][k], limits=zlimits, dx=dz, vb=False))
# RMSE_nz.append(qp.calculate_rmse(all_p[i]['truth'], all_p[i][k], limits=zlimits, dx=dz, vb=False))
# all_KLD_nz[k] = KLD_nz
# all_RMSE_nz[k] = RMSE_nz
# plt.figure(figsize=(12, 5))
# both = [plt.subplot(1, 2, i+1) for i in range(2)]
# KLD_plot = both[0]
# RMSE_plot = both[1]
# KLD_plot.set_title(r'KLD for $n(z)$')
# RMSE_plot.set_title(r'RMSE for $n(z)$')
# KLD_plot.set_xlabel('number of parameters')
# RMSE_plot.set_xlabel('number of parameters')
# KLD_plot.set_ylabel('KLD')
# RMSE_plot.set_ylabel('RMSE')
# # KLD_plot.semilogx()
# # KLD_plot.semilogy()
# # RMSE_plot.semilogx()
# # RMSE_plot.semilogy()
# for k in ['samples', 'quantiles', 'histogram']:
# KLD_plot.plot(float_numbers, all_KLD_nz[k], color=colors[k], label=k)
# RMSE_plot.plot(float_numbers, all_RMSE_nz[k], color=colors[k], label=k)
# KLD_plot.semilogy()
# KLD_plot.semilogx()
# RMSE_plot.semilogy()
# RMSE_plot.semilogx()
# KLD_plot.legend()
# RMSE_plot.legend()
# plt.savefig('summary.png')
# print('KLD metrics for n(z) estimator: ', all_KLD_nz)
# print('RMSE metrics for n(z) estimator: ', all_RMSE_nz)
###Output
_____no_output_____
###Markdown
Exploring BPZ Test Data_Alex Malz (NYU) & Phil Marshall (SLAC)_In this notebook we develop machinery to evaluate our approximations on whole datasets in "survey mode."
###Code
%load_ext autoreload
%autoreload 2
from __future__ import print_function
import hickle
import numpy as np
from pathos.multiprocessing import ProcessingPool as Pool
import random
import cProfile
import pstats
import StringIO
import timeit
import psutil
import sys
import os
import timeit
import pandas as pd
pd.set_option('display.max_columns', None)
import matplotlib.pyplot as plt
%matplotlib inline
import qp
from qp.utils import calculate_kl_divergence as make_kld
np.random.seed(seed=42)
random.seed(a=42)
###Output
_____no_output_____
###Markdown
Set-up, Ingest There are two datasets available:* $10^{5}$ LSST-like mock data provided by Sam Schmidt (UC Davis, LSST)* $10^{4}$ Euclid-like mock data provided by Melissa Graham (UW, LSST)
###Code
# choose one of these:
# dataset_key = 'Euclid'# Melissa Graham's data
dataset_key = 'LSST'# Sam Schmidt's data
dataname = dataset_key
dataset_info = {}
dataset_info[dataset_key] = {}
###Output
_____no_output_____
###Markdown
Both datasets are fit with BPZ.
###Code
if dataset_key == 'Euclid':
datafilename = 'bpz_euclid_test_10_2.probs'
elif dataset_key == 'LSST':
datafilename = 'test_magscat_trainingfile_probs.out'
dataset_info[dataset_key]['filename'] = datafilename
###Output
_____no_output_____
###Markdown
The data files don't appear to come with information about the native format or metaparameters, but we are told they're evaluations on a regular grid of redshifts with given endpoints and number of parameters.
###Code
if dataset_key == 'Euclid':
z_low = 0.01
z_high = 3.51
elif dataset_key == 'LSST':
z_low = 0.005
z_high = 2.11
dataset_info[dataset_key]['z_lim'] = (z_low, z_high)
z_grid = np.arange(z_low, z_high, 0.01, dtype='float')
z_range = z_high - z_low
delta_z = z_range / len(z_grid)
dataset_info[dataset_key]['z_grid'] = z_grid
dataset_info[dataset_key]['delta_z'] = delta_z
###Output
_____no_output_____
###Markdown
Let's read in the catalog data. Note that it has a sizeable footprint even for a "small" number of galaxies.
###Code
## Warning: reading in the data is slow for Sam Schmidt's dataset!
with open(dataset_info[dataset_key]['filename'], 'rb') as data_file:
lines = (line.split(None) for line in data_file)
lines.next()
pdfs = np.array([[float(line[k]) for k in range(1,len(line))] for line in lines])
# dataset_info[dataset_key]['native_pdfs'] = pdfs
print('storage footprint '+str(sys.getsizeof(pdfs))+' bytes')
###Output
_____no_output_____
###Markdown
Visualizing the BPZ $p(z)$'sLet's plot a few interesting PDFs from the dataset.
###Code
# colors = ['red','green','blue','cyan','magenta','yellow']
# n_plot = len(colors)
# # if dataset_key == 'mg':
# # indices = [1, 3, 14, 16, 19, 21]
# # elif dataset_key == 'ss':
# n_gals_tot = len(pdfs)
# full_gal_range = range(n_gals_tot)
# indices = np.random.choice(full_gal_range, n_plot)
# for i in range(n_plot):
# plt.plot(dataset_info[dataset_key]['z_grid'], pdfs[indices[i]],
# color=colors[i], label=dataset_key+' #'+str(indices[i]))
# plt.xlabel(r'$z$', fontsize=16)
# plt.ylabel(r'$p(z)$', fontsize=16)
# plt.title(dataset_key+' mock catalog')
# plt.legend()
# plt.savefig('pz_placeholder_'+dataset_key+'.pdf', dpi=250)
###Output
_____no_output_____
###Markdown
Note: BPZ PDFs are not properly normalized. In order to be true PDFs, we want $\int_{-\infty}^{\infty} p(z) dz = 1$, but the data file entries satisfy $\sum_{z=z_{min}}^{z_{max}} p(z) = 1$, which is not in general the same. `qp` approximates the desired integral as $1 = \int p(z) dz \approx \Delta_{z} \sum_{z=z_{min}}^{z_{max}} p(z)$, where $\Delta_{z} = \frac{z_{max} - z_{min}}{N_{ff}}$ and the native format PDF is evaluated at $N_{ff}$ redshifts. Approximating the BPZ $p(z)$'s Let's pick out a galaxy with an interesting $p(z)$ to turn into a `qp.PDF` object initialized with a gridded parametrization.
###Code
if dataset_key == 'Euclid':
chosen = 5390
elif dataset_key == 'LSST':
# chosen = 108019
indices = [ 12543, 52661, 46216, 53296, 95524, 84574 , 2607 ,56017 , 64794, 7600]
chosen = indices[9]
start_time = timeit.default_timer()
G = qp.PDF(gridded=(dataset_info[dataset_key]['z_grid'], pdfs[chosen]))
print(timeit.default_timer() - start_time)
G.plot()
###Output
_____no_output_____
###Markdown
`qp` cannot currently convert gridded PDFs to histograms or quantiles - we need to make a GMM first, and use this to instantiate a `qp.PDF` object using a `qp.composite` object based on that GMM as `qp.PDF.truth`. The number of parameters necessary for a qualitatively good fit depends on the characteristics of the dataset.
###Code
if dataset_key == 'Euclid':
nc_needed = 3
elif dataset_key == 'LSST':
nc_needed = 5
dataset_info[dataset_key]['N_GMM'] = nc_needed
###Output
_____no_output_____
###Markdown
We can fit a GMM directly to the gridded PDF (via an internal interpolation). The direct fit, however, is not guaranteed to converge, particularly if the underlying distribution is not actually well-described by a weighted sum of Gaussians -- this is why storing the GMM parameters instead of a non-parametric format can be dangerous.
###Code
start_time = timeit.default_timer()
G.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'],
using='gridded', vb=True)
time = timeit.default_timer() - start_time
print(str(time)+' for GMM fit to gridded')
G.plot()
###Output
_____no_output_____
###Markdown
The alternative is to take a large number of samples and fit a GMM to those (via the same internal interpolation). We can check that the fits are very similar. Though it is slower, we will sample before fitting to guarantee convergence.
###Code
high_res = 1000
start_time = timeit.default_timer()
G.sample(high_res, vb=False)
G.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'],
using='samples', vb=True)
time = timeit.default_timer() - start_time
print(str(time)+' for GMM fit to samples')
G.plot()
###Output
_____no_output_____
###Markdown
The `qp.composite` object can be used as the `qp.PDF.truth` to initialize a new `qp.PDF` object that doesn't have any information about the gridded or sample approximations but has a qualitatively similar shape and is thus "realistically complex" enough to draw conclusions about real data. Now we can approximate it any way we like! Consider this example for $N_f=7$ parameters.
###Code
N_f = 7
M = qp.PDF(truth=G.mix_mod, limits=dataset_info[dataset_key]['z_lim'])
M.quantize(N=N_f, vb=False)
M.histogramize(N=N_f, binrange=dataset_info[dataset_key]['z_lim'], vb=False)
M.sample(N=N_f, using='truth', vb=False)
M.plot(loc=dataset_key+'_example_pz.pdf', vb=True)
###Output
_____no_output_____
###Markdown
Quantifying the Accuracy of the ApproximationWe can also calculate the KLD metric on this `qp.PDF`. The KLD quantifies the information loss of an approximation of a PDF relative to the true PDF in units of nats. Thus, a lower KLD corresponds to more information being preserved in the approximation.
###Code
formats = ['quantiles', 'histogram', 'samples']
parametrizations = {}
for f in formats:
parametrizations[f] = {}
for ff in formats:
parametrizations[f][ff] = None
parametrizations['quantiles']['quantiles'] = M.quantiles
parametrizations['histogram']['histogram'] = M.histogram
parametrizations['samples']['samples'] = M.samples
dataset_info[dataset_key]['inits'] = parametrizations
klds = {}
P = qp.PDF(truth=M.truth)
for f in formats:
Q = qp.PDF(quantiles=dataset_info[dataset_key]['inits'][f]['quantiles'],
histogram=dataset_info[dataset_key]['inits'][f]['histogram'],
samples=dataset_info[dataset_key]['inits'][f]['samples'])
klds[f] = make_kld(P, Q)
print(klds)
###Output
_____no_output_____
###Markdown
Survey ModeWe want to compare parametrizations for large catalogs, so we'll need to be more efficient. The `qp.Ensemble` object is a wrapper for `qp.PDF` objects enabling conversions to be performed and metrics to be calculated in parallel. We'll experiment on a subsample of 100 galaxies.
###Code
n_gals_tot = len(pdfs)
n_gals_use = 100
full_gal_range = range(n_gals_tot)
subset = np.random.choice(full_gal_range, n_gals_use)
pdfs_use = pdfs[subset]
# using the same grid for output as the native format, but doesn't need to be so
dataset_info[dataset_key]['in_z_grid'] = dataset_info[dataset_key]['z_grid']
dataset_info[dataset_key]['metric_z_grid'] = dataset_info[dataset_key]['z_grid']
n_floats_use = 10
if dataset_key == 'Euclid':
dataset_info[dataset_key]['N_GMM'] = 3
elif dataset_key == 'LSST':
dataset_info[dataset_key]['N_GMM'] = 5
fit_components = dataset_info[dataset_key]['N_GMM']
n_moments_use = 3
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
###Output
_____no_output_____
###Markdown
We'll start by reading in our catalog of gridded PDFs, sampling them, fitting GMMs to the samples, and establishing a new `qp.Ensemble` object where each member `qp.PDF` object has `qp.PDF.truth`$\neq$`None`.
###Code
def setup_from_grid(in_pdfs, z_grid, N_comps, high_res=1000):
#read in the data, happens to be gridded
zlim = (min(z_grid), max(z_grid))
N_pdfs = len(in_pdfs)
# plot_examples(N_pdfs, z_grid, pdfs)
print('making the initial ensemble of '+str(N_pdfs)+' PDFs')
E0 = qp.Ensemble(N_pdfs, gridded=(z_grid, in_pdfs), vb=True)
print('made the initial ensemble of '+str(N_pdfs)+' PDFs')
#fit GMMs to gridded pdfs based on samples (faster than fitting to gridded)
print('sampling for the GMM fit')
samparr = E0.sample(high_res, vb=False)
print('took '+str(high_res)+' samples')
print('making a new ensemble from samples')
Ei = qp.Ensemble(N_pdfs, samples=samparr, vb=False)
print('made a new ensemble from samples')
print('fitting the GMM to samples')
GMMs = Ei.mix_mod_fit(comps=N_comps, vb=False)
print('fit the GMM to samples')
#set the GMMS as the truth
print('making the final ensemble')
Ef = qp.Ensemble(N_pdfs, truth=GMMs, vb=False)
print('made the final ensemble')
return(Ef)
# return
def plot_examples(z_grid, pdfs, n_plot=6):
N_pdfs =len(pdfs)
randos = np.random.choice(range(N_pdfs), n_plot)
for i in range(n_plot):
plt.plot(z_grid, pdfs[randos[i]], label=dataset_key+r'\#'+str(randos[i]))
plt.xlabel(r'$z$', fontsize=16)
plt.ylabel(r'$p(z)$', fontsize=16)
plt.title(dataset_key+' mock catalog')
plt.legend()
plt.savefig('pz_placeholder_'+dataset_key+'.png', dpi=250)
# pr = cProfile.Profile()
# pr.enable()
catalog = setup_from_grid(pdfs_use, dataset_info[dataset_key]['in_z_grid'],
fit_components)
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(s.getvalue())
plot_examples(dataset_info[dataset_key]['in_z_grid'], pdfs_use, n_plot=6)
###Output
_____no_output_____
###Markdown
Next, we compute the KLD between each approximation and the truth for every member of the ensemble. We make the `qp.Ensemble.kld` into a `qp.PDF` object of its own to compare the moments of the KLD distributions for different parametrizations.
###Code
def analyze_individual(E, z_grid, N_floats, N_moments=4):
zlim = (min(z_grid), max(z_grid))
z_range = zlim[-1] - zlim[0]
delta_z = z_range / len(z_grid)
Eq, Eh, Es = E, E, E
inits = {}
for f in formats:
inits[f] = {}
for ff in formats:
inits[f][ff] = None
print('performing quantization')
inits['quantiles']['quantiles'] = Eq.quantize(N=N_floats, vb=False)
print('performing histogramization')
inits['histogram']['histogram'] = Eh.histogramize(N=N_floats, binrange=zlim, vb=False)
print('performing sampling')
inits['samples']['samples'] = Es.sample(samps=N_floats, vb=False)
print('making the approximate ensembles')
Eo ={}
for f in formats:
Eo[f] = qp.Ensemble(E.n_pdfs, truth=E.truth,
quantiles=inits[f]['quantiles'],
histogram=inits[f]['histogram'],
samples=inits[f]['samples'])
print('made the approximate ensembles')
print('calculating the individual metrics')
klds = {}
metrics = {}
moments = {}
for key in Eo.keys():
print('starting '+key)
klds[key] = Eo[key].kld(using=key, limits=zlim, dx=delta_z)
samp_metric = qp.PDF(samples=klds[key])
gmm_metric = samp_metric.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'],
using='samples')
metrics[key] = qp.PDF(truth=gmm_metric)
moments[key] = []
for n in range(N_moments+1):
moments[key].append([qp.utils.calculate_moment(metrics[key], n,
using=key,
limits=zlim,
dx=delta_z,
vb=False)])
print('finished with '+key)
print('calculated the individual metrics')
# plot_individual(klds, N_floats)
return(Eo, klds, moments)
def plot_individual(pz_klds, N_floats):
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
plot_bins = np.linspace(-3., 3., 20)
for key in pz_klds.keys():
plt.hist(np.log(pz_klds[key]), color=colors[key], alpha=0.5,
label=key, normed=True, bins=plot_bins)
plt.legend()
plt.ylabel('frequency')
plt.xlabel(r'$\log[KLD]$')
plt.title(dataset_key+r' dataset with $N_{f}='+str(N_floats)+r'$')
plt.savefig(dataset_key+'_metric_histogram_placeholder.png', dpi=250)
# pr = cProfile.Profile()
# pr.enable()
(ensembles, pz_klds, metric_moments) = analyze_individual(catalog,
dataset_info[dataset_key]['metric_z_grid'],
n_floats_use,
n_moments_use)
dataset_info[dataset_key]['pz_klds'] = pz_klds
dataset_info[dataset_key]['pz_kld_moments'] = metric_moments
plot_individual(pz_klds, n_floats_use)
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(s.getvalue())
###Output
_____no_output_____
###Markdown
Finally, we calculate metrics on the stacked estimator $\hat{n}(z)$ that is the average of all members of the ensemble.
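Concretely, for an ensemble of $N$ member PDFs $p_{i}(z)$ the stacked estimator is just their normalized average,
$$\hat{n}(z) = \frac{1}{N}\sum_{i=1}^{N} p_{i}(z),$$
and we compare the stack built from each approximate format against the stack built from the true GMMs using the same KLD metric.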
###Code
def analyze_stacked(E0, E, z_grid):
zlim = (min(z_grid), max(z_grid))
z_range = zlim[-1] - zlim[0]
delta_z = z_range / len(z_grid)
parametrizations = E.keys()
print('stacking the ensembles')
stacked_pdfs = {}
for key in formats:
stacked_pdfs[key] = qp.PDF(gridded=E[key].stack(z_grid, using=key,
vb=False)[key])
stacked_pdfs['truth'] = qp.PDF(gridded=E0.stack(z_grid, using='truth',
vb=False)['truth'])
print('stacked the ensembles')
print('calculating the metrics')
klds = {}
for key in parametrizations:
klds[key] = qp.utils.calculate_kl_divergence(stacked_pdfs['truth'],
stacked_pdfs[key],
limits=zlim, dx=delta_z)
print('calculated the metrics')
# plot_estimators(z_grid, stacked_pdfs, klds)
return(stacked_pdfs, klds)
def plot_estimators(z_grid, stacked_pdfs, klds):
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
plt.title(r'$\hat{n}(z)$ for '+str(n_floats_use)+' numbers')
plt.plot(z_grid, stacked_pdfs['truth'].evaluate(z_grid, vb=False)[1], color='black', lw=4, alpha=0.3, label='truth')
for key in formats:
plt.plot(z_grid, stacked_pdfs[key].evaluate(z_grid, vb=False)[1], label=key+' KLD='+str(klds[key]), color=colors[key])
plt.xlabel(r'$z$')
plt.ylabel(r'$\hat{n}(z)$')
plt.legend()
plt.title(r'$\hat{n}(z)$ for '+str(n_floats_use)+' numbers')
plt.savefig(dataset_key+'_nz_comparison.png', dpi=250)
# pr = cProfile.Profile()
# pr.enable()
(stack_evals, nz_klds) = analyze_stacked(catalog, ensembles, dataset_info[dataset_key]['metric_z_grid'])
dataset_info[dataset_key]['nz_ests'] = stack_evals
dataset_info[dataset_key]['nz_klds'] = nz_klds
plot_estimators(dataset_info[dataset_key]['metric_z_grid'], stack_evals, nz_klds)
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(s.getvalue())
###Output
_____no_output_____
###Markdown
We save the data so we can remake the plots later without running everything again. ScalingWe'd like to do this for many values of $N_{f}$ as well as larger catalog subsamples, repeating the analysis many times to establish error bars on the KLD as a function of format, $N_{f}$, and dataset. The things we want to plot across multiple datasets/number of parameters are:1. KLD of stacked estimator, i.e. `N_f` vs. `nz_output[dataset][format][instantiation][KLD_val_for_N_f]`2. moments of KLD of individual PDFs, i.e. `n_moment, N_f` vs. `pz_output[dataset][format][n_moment][instantiation][moment_val_for_N_f]`So, we need to make sure these are saved!
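For concreteness, here is a minimal sketch (with made-up numbers) of the nesting as the cells below actually construct it; the per-format lists are indexed first by $N_{f}$ and then by instantiation, and those two axes get swapped at plotting time.
```python
# Hypothetical illustration only -- the real entries are filled in by the cells below.
nz_stats_example = {
    'N_f': [10],                            # the values of N_f tested so far
    'Euclid': {'quantiles': [[0.7, 0.8]],   # one list per N_f, one entry per instantiation
               'histogram': [[1.3, 1.1]],
               'samples':   [[0.9, 1.0]]}}

# per-object KLD moments: [dataset][format][n_moment][N_f index][instantiation]
pz_stats_example = {
    'N_f': [10],
    'Euclid': {'quantiles': [[[1.0]], [[0.05]], [[0.2]], [[0.4]]]}}
```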
###Code
N_f = n_floats_use # tag these results with the number of floats used in this run
if os.path.exists('nz_metrics.hkl'):
with open('nz_metrics.hkl', 'r') as nz_file:
#read in content of list/dict
nz_stats = hickle.load(nz_file)
else:
nz_stats = {}
nz_stats['N_f'] = []
if N_f not in nz_stats['N_f']:
nz_stats['N_f'].append(N_f)
where_N_f = nz_stats['N_f'].index(N_f)
if dataset_key not in nz_stats.keys():
nz_stats[dataset_key] = {}
    for f in formats:
nz_stats[dataset_key][f] = [[]]
for f in formats:
nz_stats[dataset_key][f][where_N_f].append(dataset_info[dataset_key]['nz_klds'][f])
with open('nz_metrics.hkl', 'w') as nz_file:
hickle.dump(nz_stats, nz_file)
###Output
_____no_output_____
###Markdown
We want to plot the KLD on $\hat{n}(z)$ for all formats as $N_{f}$ changes. We want to repeat this for many subsamples of the catalog to establish error bars on the KLD values.
###Code
with open('nz_metrics.hkl', 'r') as nz_file:
nz_stats = hickle.load(nz_file)
colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'}
# need to get some version of this working from nz_klds
plt.figure(figsize=(5, 5))
for f in formats:
data_arr = np.swapaxes(np.array(nz_stats[dataset_key][f]), 0, 1)#turn N_f * instantiations into instantiations * N_f
n_i = len(data_arr)
a = 1./n_i
plt.plot([2 * max(nz_stats['N_f']), 2 * max(nz_stats['N_f'])], [1., 10.], color=colors[f], alpha=a, label=f)
for i in data_arr:
# will be regular plot not scatter with more N_f options
plt.plot(nz_stats['N_f'], i[0], color=colors[f], alpha=a)
plt.semilogy()
plt.semilogx()
plt.xlim(min(nz_stats['N_f'])-1, max(nz_stats['N_f'])+1)
plt.ylim(1., 10.)
plt.xlabel(r'number of parameters')
plt.ylabel(r'KLD')
plt.legend()
plt.title(r'$\hat{n}(z)$ KLD on '+str(n_gals_use)+' galaxies from '+dataset_key)
plt.savefig(dataset_key+'_nz_metrics_placeholder.png', dpi=250)
# won't really know how this looks without more N_f tested
###Output
_____no_output_____
###Markdown
We want to plot the moments of the KLD distribution for each format as $N_{f}$ changes.
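Here the $n$-th moment is the raw moment of the per-object KLD distribution (assuming `qp.utils.calculate_moment` returns raw moments),
$$\mathrm{E}\left[KLD^{n}\right] = \int KLD^{n}\,p(KLD)\,d(KLD),$$
so the first moment is the mean and the higher moments carry information about the spread and tail of the distribution.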
###Code
if os.path.exists('pz_metrics.hkl'):
with open('pz_metrics.hkl', 'r') as pz_file:
#read in content of list/dict
pz_stats = hickle.load(pz_file)
else:
pz_stats = {}
pz_stats['N_f'] = []
if N_f not in pz_stats['N_f']:
pz_stats['N_f'].append(N_f)
where_N_f = pz_stats['N_f'].index(N_f)
if dataset_key not in pz_stats.keys():
pz_stats[dataset_key] = {}
    for f in formats:
pz_stats[dataset_key][f] = []
for m in range(n_moments_use + 1):
pz_stats[dataset_key][f].append([[]])
if N_f not in pz_stats['N_f']:
pz_stats[dataset_key][f][m].append([])
for f in formats:
for m in range(n_moments_use + 1):
pz_stats[dataset_key][f][m][where_N_f].append(dataset_info[dataset_key]['pz_kld_moments'][f][m])
with open('pz_metrics.hkl', 'w') as pz_file:
hickle.dump(pz_stats, pz_file)
with open('pz_metrics.hkl', 'r') as pz_file:
pz_stats = hickle.load(pz_file)
def make_patch_spines_invisible(ax):
ax.set_frame_on(True)
ax.patch.set_visible(False)
for sp in ax.spines.values():
sp.set_visible(False)
shapes = ['o','+','x','v','^','<','>']
fig, ax = plt.subplots()
fig.subplots_adjust(right=1.)
ax_n = ax
for key in formats:
ax_n.plot([-1], [0], color=colors[key], label=key)
for n in range(1, 4):
ax.scatter([-1], [0], color='k', marker=shapes[n-1], label='moment '+str(n))
if n>1:
ax_n = ax.twinx()
if n>2:
ax_n.spines["right"].set_position(("axes", 1. + 0.1 * (n-1)))
make_patch_spines_invisible(ax_n)
ax_n.spines["right"].set_visible(True)
    for f in formats:
data_arr = np.swapaxes(np.array(pz_stats[dataset_key][f][n]), 0, 1)
n_i = len(data_arr)
a = 1./n_i
for i in data_arr:
ax_n.scatter(pz_stats['N_f'], i, marker=shapes[n-1], color=colors[f], alpha=a)
ax_n.set_ylabel('moment '+str(n))
ax.set_xlim(1,1000)#should be N_f range and logged
ax.semilogx()
ax.set_xlabel('number of parameters')
ax.legend()
fig.suptitle('KLD moments on '+str(n_gals_use)+' galaxies from '+dataset_key)
fig.savefig(dataset_key+'_pz_metrics_placeholder.png', dpi=250)
###Output
_____no_output_____
###Markdown
Okay, now all I have to do is have this loop over both datasets, number of galaxies, and number of floats! Everything after here is scratch. That's all, folks!
###Code
## everything works above here! now it's time to make plots from this output!
# # Function to test the experimental qp.Ensemble object!
# def analyze():#(pdfs, N_comps, z, N_floats):
# #read in the data, happens to be gridded
# z_low, z_high = min(z), max(z)
# N_pdfs = len(pdfs)
# out_E = {}
# E0 = qp.Ensemble(N_pdfs, gridded=(z, pdfs), vb=False)
# #fit gridded pdfs as GMMs based on samples
# samparr = E0.sample(1000, vb=False)
# print(np.shape(samparr))
# Ei = qp.Ensemble(N_pdfs, samples=samparr, vb=False)
# GMMs = Ei.mix_mod_fit(comps=N_comps, using='samples', vb=False)
# # out_E['GMMs'] = []
# # for GMM in GMMs:
# # out_E['GMMs'].append(GMM.functions[0].stats())
# #set the GMMS as the truth
# Ef = qp.Ensemble(N_pdfs, truth=GMMs, vb=False)
# #stack them and save the output
# out_E['truth'] = Ef.stack(z, using='mix_mod', vb=False)
# # #evaluate as gridded and save the output
# # Et = qp.Ensemble(N_pdfs, gridded=Ef.evaluate(z))
# # out_E['gridded'] = Et.stack(z, using='gridded')
# #evaluate as quantiles and save the output
# Eq = qp.Ensemble(N_pdfs, quantiles=Ef.quantize(N=N_floats), vb=False)
# #q_stack = Eq.stack(z, using='quantiles')
# out_E['quantiles'] = Eq.stack(z, using='quantiles', vb=False)
# # #evaluate as histogram and save the output
# # Eh = qp.Ensemble(N_pdfs, histogram=Ef.histogramize(N=N_floats, binrange=(z_low, z_high)))
# # #h_stack = Eh.stack(z, using='histogram')
# # out_E['histogram'] = Eh.stack(z, using='histogram')
# # #evaluate as samples and save the output
# # Es = qp.Ensemble(N_pdfs, samples=Ef.sample(samps=N_floats))
# # #s_stack = Es.stack(z, using='samples')
# # out_E['samples'] = Es.stack(z, using='samples')
# return(out_E)#, KLDs, RMSEs)
###Output
_____no_output_____
###Markdown
Let's run a test with 100 galaxies and 10 parameters. This should take about 5 minutes or so.
###Code
# print(n_gals_use, n_floats_use, s.getvalue())
###Output
_____no_output_____
###Markdown
Let's show the stacked versions and compute metrics.
###Code
# print(results.keys())
# print(results['truth']['mix_mod'])
# KLDs, RMSEs = {}, {}
# P = qp.PDF(gridded=results['truth']['mix_mod'])
# metric_keys = results.keys()
# metric_keys.remove('truth')
# for est in metric_keys:
# Q = qp.PDF(gridded=results[est][est])
# KLDs[est] = qp.utils.calculate_kl_divergence(P, Q, vb=False)
# RMSEs[est] = qp.utils.calculate_rmse(P, Q, vb=False)
# plt.plot(results[est][est][0], results[est][est][1], label=est)
# plt.legend()
# print(KLDs, RMSEs)
###Output
_____no_output_____
###Markdown
Things are quite broken after this point!
###Code
# P = qp.PDF(gridded=stack_ests['truth'])
# KLDs, RMSEs = {}, {}
# for est in .keys():
# Q = qp.PDF(gridded=stack_ests[est])
# KLDs[est] = qp.utils.calculate_kl_divergence(P, Q, vb=False)
# RMSEs[est] = qp.utils.calculate_rmse(P, Q, vb=False)
###Output
_____no_output_____
###Markdown
Let's plot the log standard deviations of the first component of the mixture models.
###Code
# moments = np.array(results['stats']).T
# fit_stats = moments[1]
# plt.hist(np.log(fit_stats))
###Output
_____no_output_____
###Markdown
Let's check the distribution of standard deviations of the ensemble.
###Code
# D = qp.PDF(samples = np.log(fit_stats))
# T = D.mix_mod_fit(n_components=1)
# D.plot()
# print(np.exp(T.functions[0].stats()))
###Output
_____no_output_____
###Markdown
Now enough of the `qp.Ensemble` functionality has been implemented to merge into the `master` branch!
###Code
# this ends the test of the experimental qp.Ensemble object
# you may now return to your regularly scheduled programming
# def analyze_one(index, N_comps, z, N_floats, logfilename='logfile.txt', vb=False):
# """
# Model the input BPZ P(z) as a GMM, approximate that GMM in
# various ways, and assess the quality of each approximation.
# Parameters
# ----------
# index : int
# ID of galaxy
# N_comps : int
# Number of components used in GMM
# N_floats : int
# Number of floats used to parametrize the P(z)
# z : float, ndarr
# Redshift array for input gridded "truth". Used for
# evaluating n(z) too
# logfilename: string
# where to put logging information
# vb : boolean
# Verbose output?
# Returns
# -------
# result : dict
# Dictionary containing metric values, n(z) on standard
# grid, samples, "true" GMM gridded p(z).
# Notes
# -----
# In some cases the GMM does not fit well, leading to bad KLD and
# RMSE values when it is compared to the truth.
# """
# # # Make z array if we don't already have it:
# # if z is None:
# # z = np.arange(0.01, 3.51, 0.01, dtype='float')
# dz = (max(z) - min(z)) / len(z)
# zlimits = [min(z), max(z)]
# # Make a dictionary to contain the results:
# result = {}
# # Make a GMM model of the input BPZ p(z) (which are stored
# # in the global 'pdfs' variable:
# G = qp.PDF(gridded=(z, pdfs[index]), vb=vb)
# # Draw 1000 samples, fit a GMM model to them, and make a true PDF:
# G.sample(1000, vb=vb)
# GMM = G.mix_mod_fit(n_components=N_comps, vb=vb)
# P = qp.PDF(truth=GMM, vb=vb)
# # Evaluate the GMM on the z grid, and store in the result dictionary. We'll
# # need this to make our "true" n(z) estimator. We don't need to keep the
# # z array, as we passed that in.
# result['truth'] = P.evaluate(z, using='truth', vb=vb)[1]
# # Now approximate P in various ways, and assess:
# Q, KLD, RMSE, approximation = {}, {}, {}, {}
# Q['quantiles'] = qp.PDF(quantiles=P.quantize(N=N_floats, vb=vb), vb=vb)
# Q['histogram'] = qp.PDF(histogram=P.histogramize(N=N_floats, binrange=zlimits, vb=vb), vb=vb)
# Q['samples'] = qp.PDF(samples=P.sample(N=N_floats, vb=vb), vb=vb)
# for k in Q.keys():
# KLD[k] = qp.calculate_kl_divergence(P, Q[k], limits=zlimits, dx=dz, vb=vb)
# RMSE[k] = qp.calculate_rmse(P, Q[k], limits=zlimits, dx=dz, vb=vb)
# approximation[k] = Q[k].evaluate(z, using=k, vb=vb)[1]
# # Store approximations:
# result['KLD'] = KLD
# result['RMSE'] = RMSE
# result['approximation'] = approximation
# result['samples'] = Q['samples'].samples
# with open(logfilename, 'a') as logfile:
# logfile.write(str((index, timeit.default_timer() - start_time))+'\n')
# return result
###Output
_____no_output_____
###Markdown
OK, now let's collate the metrics for the first 100 galaxies over a variable number of parameters, and look at the distribution of metric values. We're using multiprocessing because the `for` loop is slow; the rate-limiting step is the optimization routine for finding quantiles of a GMM.
###Code
# def one_analysis(N):
# all_results[str(N)] = []
# pr = cProfile.Profile()
# pr.enable()
# # with qp.Ensemble
# n_gals_tot = len(pdfs)
# full_gal_range = range(n_gals_tot)
# subset = np.random.choice(full_gal_range, n_gals)
# pdfs_use = pdfs[subset]
# all_results[str(N)] = analyze(pdfs_use, nc_needed, z, N)
# # # if multiprocessing:
# # logfilename = dataname + str(n_gals) + 'multi' + str(N)+'.txt'
# # def help_analyze(i):
# # return analyze_one(i, nc_needed, z, N, logfilename=logfilename)
# # pool = Pool(psutil.cpu_count() - 1)
# # results = pool.map(help_analyze, range(n_gals))
# # all_results[str(N)] = results
# # # tl;dr Tmax=270s for N_floats=3, 100 galaxies, 3 processors
# # # if looping:
# # logfilename = dataname + str(n_gals) + 'loop' + str(N)+'.txt'
# # for i in range(100):
# # all_results[str(N)].append(analyze_one(i, 2, z, N, logfilename=logfilename))
# # if i%10 == 0: print('.', end='')
# # # tl;dr Tmax=352s for N_floats=3, 100 galaxies
# pr.disable()
# s = StringIO.StringIO()
# sortby = 'cumtime'
# ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
# ps.print_stats()
# print(N, s.getvalue())
# return
# #%%time
# float_numbers = [3]#, 10, 30, 100]
# n_float_numbers = len(float_numbers)
# # gal_numbers = [100]#, 1000, 10000]
# # n_gal_numbers = len(gal_numbers)
# # total_results ={}
# # for M in gal_numbers:
# # n_gals = M
# n_gals = 100
# all_results = {}
# for N in float_numbers:
# start_time = timeit.default_timer()
# one_analysis(N)
# # total_results[str(n_gals)] = all_results
###Output
_____no_output_____
###Markdown
Since the previous step is quite slow (on the order of 5 minutes per test of different numbers of parameters on my laptop), this is a good point to save the results. We can load them from the file later and not remake them if we only want to do the rest of the analysis.
###Code
# with open('all_results.hkl', 'w') as result_file:
# hickle.dump(all_results, result_file)
# with open('all_results.hkl', 'r') as result_file:
# all_results = hickle.load(result_file)
# all_results = total_results[str(gal_numbers[0])]
# all_KLD, all_RMSE = [], []
# for n in range(n_float_numbers):
# KLD, RMSE = {}, {}
# for approximation in all_results[str(float_numbers[n])][0]['KLD'].keys():
# x = np.array([])
# for k in range(len(all_results[str(float_numbers[n])])):
# x = np.append(x, all_results[str(float_numbers[n])][k]['KLD'][approximation])
# KLD[approximation] = x
# x = np.array([])
# for k in range(len(all_results[str(float_numbers[n])])):
# x = np.append(x, all_results[str(float_numbers[n])][k]['RMSE'][approximation])
# RMSE[approximation] = x
# all_KLD.append(KLD)
# all_RMSE.append(RMSE)
###Output
_____no_output_____
###Markdown
Now let's plot histograms of the metric values.
###Code
# colors = {'samples':'green', 'quantiles':'blue', 'histogram':'red'}
# plt.figure(figsize=(12, 5 * n_float_numbers))
# i=0
# for n in range(n_float_numbers):
# i += 1
# # Lefthand panel: KLD
# plt.subplot(n_float_numbers, 2, i)
# plt.title('KLD for '+str(float_numbers[n])+' stored numbers')
# bins = np.linspace(0.0, 5., 25)
# for k in ['samples', 'quantiles', 'histogram']:
# plt.hist(all_KLD[n][k], bins, label=k, fc=colors[k], ec=colors[k], alpha=0.3, normed=True)
# #plt.semilogx()
# plt.xlabel('KL Divergence Metric', fontsize=16)
# plt.ylim(0., 5.0)
# plt.xlim(0., 5.0)
# plt.legend()
# i += 1
# # Righthand panel: RMSE
# plt.subplot(n_float_numbers, 2, i)#+n_numbers)
# plt.title('RMSE for '+str(float_numbers[n])+' stored numbers')
# bins = np.linspace(0.0, 5., 25)
# for k in ['samples', 'quantiles', 'histogram']:
# plt.hist(all_RMSE[n][k], bins, label=k, fc=colors[k], ec=colors[k], alpha=0.3, normed=True)
# #plt.semilogx()
# plt.xlabel('RMS Error Metric', fontsize=16)
# plt.ylim(0., 5.0)
# plt.xlim(0., 5.0)
# plt.legend();
# plt.savefig('money.png')
###Output
_____no_output_____
###Markdown
Interestingly, the metrics don't agree, nor is the behavior consistent across different numbers of parameters. However, as the number of parameters increases, the distribution of the metrics converges to lower numbers.KLD seems to flag more "bad" approximations than RMSE. How do we know where to set the threshold in each metric? We should think of the right way to get a summary statistic (first moment?) on the ensemble of KLD or RMSE values so we can make the plot of number of parameters vs. quality of approximation. Now let's compute the estimated $n(z)$. We'll do this with the GMM "truth", and then using each of our approximations. And we'll normalize the $n(z)$ to account for lost systems with bad approximations.
###Code
# plt.figure(figsize=(6, 5 * n_float_numbers))
# all_n = []
# all_x = []
# all_y = []
# for i in range(n_float_numbers):
# results = all_results[str(float_numbers[i])]
# n = {}
# # Pull out all truths and compute the average at each z:
# x = np.zeros([len(z), len(results)])
# y = {}
# for approx in ['samples', 'quantiles', 'histogram']:
# y[approx] = np.zeros([len(z), len(results)])
# for k in range(len(results)):
# y[approx][:,k] = results[k]['approximation'][approx]
# for k in range(len(results)):
# x[:,k] = results[k]['truth']
# # Now do the averaging to make the estimators:
# n['truth'] = np.mean(x, axis=1)
# n['truth'] /= np.sum(n['truth']) * delta_z
# for approx in ['samples', 'quantiles', 'histogram']:
# n[approx] = np.mean(y[approx], axis=1)
# n[approx] /= np.sum(n[approx]) * delta_z
# all_n.append(n)
# all_x.append(x)
# all_y.append(y)
# # Note: this uses the samples' KDE to make the approximation. We could (and
# # should!) also try simply concatenating the samples and histogramming them.
# # Plot truth and all the approximations.
# # The NaNs in the histogram approximation make that unplottable for now.
# plt.subplot(n_float_numbers, 1, i+1)#+n_numbers)
# plt.title(r'$n(z)$ for '+str(float_numbers[i])+' numbers')
# plt.plot(z, n['truth'], color='black', lw=4, alpha=0.3, label='truth')
# for k in ['samples', 'quantiles', 'histogram']:
# plt.plot(z, n[k], label=k, color=colors[k])
# plt.xlabel('redshift z')
# plt.ylabel('n(z)')
# plt.legend();
# plt.savefig('nz_comparison.png', dpi=300)
###Output
_____no_output_____
###Markdown
The "samples" approximation gives the best result for the $n(z)$ estimator even with a small number of samples. However, once the number of parameters increases slightly, the "quantiles" approximation performs similarly. It takes a large number of parameters before the "histogram" approximation approaches the other options. Let's use the `qp.PDF` object to compare them quantitatively (since $n(z)$ can be normalized to give the global $p(z)$).
###Code
# all_p = []
# for i in range(n_float_numbers):
# n = all_n[i]
# p = {}
# for k in ['samples', 'quantiles', 'histogram']:
# p[k] = qp.PDF(gridded=(z,n[k]), vb=False)
# p['truth'] = qp.PDF(gridded=(z,n['truth']), vb=False)
# all_p.append(p)
# all_KLD_nz, all_RMSE_nz = {}, {}
# zlimits, dz = [z_low, z_high], 0.01
# for k in ['samples', 'quantiles', 'histogram']:
# p = all_p[i]
# KLD_nz, RMSE_nz = [], []
# for i in range(n_float_numbers):
# KLD_nz.append(qp.calculate_kl_divergence(all_p[i]['truth'], all_p[i][k], limits=zlimits, dx=dz, vb=False))
# RMSE_nz.append(qp.calculate_rmse(all_p[i]['truth'], all_p[i][k], limits=zlimits, dx=dz, vb=False))
# all_KLD_nz[k] = KLD_nz
# all_RMSE_nz[k] = RMSE_nz
# plt.figure(figsize=(12, 5))
# both = [plt.subplot(1, 2, i+1) for i in range(2)]
# KLD_plot = both[0]
# RMSE_plot = both[1]
# KLD_plot.set_title(r'KLD for $n(z)$')
# RMSE_plot.set_title(r'RMSE for $n(z)$')
# KLD_plot.set_xlabel('number of parameters')
# RMSE_plot.set_xlabel('number of parameters')
# KLD_plot.set_ylabel('KLD')
# RMSE_plot.set_ylabel('RMSE')
# # KLD_plot.semilogx()
# # KLD_plot.semilogy()
# # RMSE_plot.semilogx()
# # RMSE_plot.semilogy()
# for k in ['samples', 'quantiles', 'histogram']:
# KLD_plot.plot(float_numbers, all_KLD_nz[k], color=colors[k], label=k)
# RMSE_plot.plot(float_numbers, all_RMSE_nz[k], color=colors[k], label=k)
# KLD_plot.semilogy()
# KLD_plot.semilogx()
# RMSE_plot.semilogy()
# RMSE_plot.semilogx()
# KLD_plot.legend()
# RMSE_plot.legend()
# plt.savefig('summary.png')
# print('KLD metrics for n(z) estimator: ', all_KLD_nz)
# print('RMSE metrics for n(z) estimator: ', all_RMSE_nz)
###Output
_____no_output_____ |
Course2_Improve DNN/ww1/Regularization_v2a.ipynb | ###Markdown
RegularizationWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!**You will learn to:** Use regularization in your deep learning models.Let's first import the packages you are going to use. Updates to Assignment If you were working on a previous version* The current notebook filename is version "2a". * You can find your work in the file directory as version "2".* To see the file directory, click on the Coursera logo at the top left of the notebook. List of Updates* Clarified explanation of 'keep_prob' in the text description.* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%* Updated print statements and 'expected output' for easier visual comparisons.
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. **Figure 1** : **Football field** The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head They give you the following 2D dataset from France's past 10 games.
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.- If the dot is blue, it means the French player managed to hit the ball with his/her head- If the dot is red, it means the other team's player hit the ball with their head**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. 1 - Non-regularized modelYou will use the following neural network (already implemented for you below). This model can be used:- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than oneYou will first try the model without any regularization. Then, you will implement:- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724213
Cost after iteration 20000: 0.13851642423253263
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting. 2 - L2 RegularizationThe standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$To:$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$Let's modify your cost and observe the consequences.**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$, use:```pythonnp.sum(np.square(Wl))```Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
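As a quick, purely illustrative sanity check of the regularization term (toy numbers, separate from the graded function):
```python
import numpy as np

W_toy = np.array([[1., 2.],
                  [3., 4.]])      # made-up weights
m_toy, lambd_toy = 5, 0.1         # made-up batch size and lambda

sum_of_squares = np.sum(np.square(W_toy))                    # 1 + 4 + 9 + 16 = 30
L2_term = (1. / m_toy) * (lambd_toy / 2.) * sum_of_squares   # (1/5) * (0.1/2) * 30 = 0.3 (up to float rounding)
print(L2_term)
```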
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (1./m)*(lambd/2.)*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
### END CODER HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
###Output
cost = 1.78648594516
###Markdown
**Expected Output**: **cost** 1.78648594516 Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. **Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
###Output
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
###Markdown
**Expected Output**:```dW1 = [[-0.25604646 0.12298827 -0.28297129] [-0.17706303 0.34536094 -0.4410571 ]]dW2 = [[ 0.79276486 0.85133918] [-0.0957219 -0.01720463] [-0.13100772 -0.03750433]]dW3 = [[-1.77691347 -0.11832879 -0.09397446]]``` Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost`- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282239
Cost after iteration 20000: 0.26809163371273015
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team!You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.**What is L2-regularization actually doing?**:L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. **What you should remember** -- the implications of L2-regularization on:- The cost computation: - A regularization term is added to the cost- The backpropagation function: - There are extra terms in the gradients with respect to weight matrices- Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 3 - DropoutFinally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!<!--To understand drop-out, consider this conversation with a friend:- Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."!--> Figure 2 : Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. Figure 3 : Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 3.1 - Forward propagation with dropout**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**:You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.2. 
Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.This python statement: `X = (X < keep_prob).astype(int)` is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :```for i,v in enumerate(x): if v < keep_prob: x[i] = 1 else: v >= keep_prob x[i] = 0```Note that the `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
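As a tiny standalone illustration of the masking trick (made-up values, separate from the graded function below):
```python
import numpy as np

np.random.seed(0)                                              # illustrative only
keep_prob_demo = 0.8
A_demo = np.random.randn(3, 4)                                 # pretend these are activations
D_demo = (np.random.rand(3, 4) < keep_prob_demo).astype(int)   # roughly 80% ones, 20% zeros
A_dropped = (A_demo * D_demo) / keep_prob_demo                 # zero some entries, rescale the rest (inverted dropout)
print(D_demo)
```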
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = (D1 < keep_prob).astype(int) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = (D2 < keep_prob).astype(int) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
###Output
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
###Markdown
**Expected Output**: **A3** [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] 3.2 - Backward propagation with dropout**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**:Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
###Output
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
###Markdown
**Expected Output**: ```dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748] [ 0.65515713 0. -0.00337459 0. -0. ]]dA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731] [ 0. 0.53159854 -0. 0.53159854 -0.34089673] [ 0. 0. -0.00292733 0. -0. ]]``` Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:- `forward_propagation_with_dropout` instead of `forward_propagation`.- `backward_propagation_with_dropout` instead of `backward_propagation`.
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
Examples/DecisionTree.ipynb | ###Markdown
Decision Tree with the Iris DatasetFor an explanation of decision trees, see [our course notes](https://jennselby.github.io/MachineLearningCourseNotes/decision-trees).This notebook uses example code from http://scikit-learn.org/stable/modules/tree.html. Instructions0. If you haven't already, follow [the setup instructions here](https://jennselby.github.io/MachineLearningCourseNotes/setting-up-python3) to get all necessary software installed.0. Install the software specific to this notebook, as explained in the [Setup](Setup) section.0. Read through the code in the following sections: * [Iris Dataset](Iris-Dataset) * [Visualization of Dataset](Visualization-of-Dataset) * [Model Training](Model-Training) * [Visualization of Model Output](Visualization-of-Model-Output) * [Prediction](Prediction)0. Complete one or both exercise options: * [Exercise Option 1 - Standard Difficulty](Exercise-Option-1---Standard-Difficulty) * [Exercise Option 2 - Advanced Difficulty](Exercise-Option-2---Advanced-Difficulty) SetupBefore you can run this code, you will need to install some extra software.1. Install homebrew (if you don't already have it) following the [directions on their site](https://brew.sh/).1. Install the graphviz library that will let us visualize the decision tree. In Terminal, run>`brew install graphviz`1. Install the pydot library that allows you to call graphviz from Python. In Terminal run>`pip3 install pydot`.
###Code
from sklearn.datasets import load_iris # the iris dataset is included in scikit-learn
from sklearn import tree # for fitting our model
# these are all needed for the particular visualization we're doing
from six import StringIO
import pydot
import os.path
# to display graphs in this notebook
%matplotlib inline
import matplotlib.pyplot
###Output
_____no_output_____
###Markdown
Iris DatasetBefore you go on, make sure you understand this dataset. Modify the cell below to examine different parts of the dataset that are contained in the 'iris' dictionary object.What are the features? What are we trying to classify?
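For example (purely exploratory; adapt as you like), you might start with something along these lines:
```python
print(iris.feature_names)   # the four measurements recorded for each flower
print(iris.target_names)    # the three species we are trying to classify
print(iris.data[:3])        # the first few feature vectors
print(iris.target[:3])      # their integer class labels
```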
###Code
iris = load_iris()
iris.keys()
###Output
_____no_output_____
###Markdown
You can also try looking at it using a [pandas dataframe](https://jennselby.github.io/MachineLearningCourseNotes/pandas).
###Code
import pandas
iris_df = pandas.DataFrame(iris.data)
iris_df.columns = iris.feature_names
iris_df['target'] = [iris.target_names[target] for target in iris.target]
iris_df.head()
iris_df.describe()
###Output
_____no_output_____
###Markdown
Visualization of DatasetLet's visualize our dataset, so that we can better understand what it looks like.Change the first two variables to change which features you are looking at.
###Code
# Plot two of the features (the first and fourth columns, in this case)
x1_feature = 0
x2_feature = 3
x1 = iris.data[:,x1_feature]
x2 = iris.data[:,x2_feature]
# The data are in order by type. Find out where the other types start
start_type_one = list(iris.target).index(1)
start_type_two = list(iris.target).index(2)
# create a figure and label it
fig = matplotlib.pyplot.figure()
fig.suptitle('Two Features of the Iris Data Set')
matplotlib.pyplot.xlabel(iris.feature_names[x1_feature])
matplotlib.pyplot.ylabel(iris.feature_names[x2_feature])
# put the input data on the graph, with different colors and shapes for each type
scatter_0 = matplotlib.pyplot.scatter(x1[:start_type_one], x2[:start_type_one],
c="red", marker="o", label=iris.target_names[0])
scatter_1 = matplotlib.pyplot.scatter(x1[start_type_one:start_type_two], x2[start_type_one:start_type_two],
c="blue", marker="^", label=iris.target_names[1])
scatter_2 = matplotlib.pyplot.scatter(x1[start_type_two:], x2[start_type_two:],
c="yellow", marker="*", label=iris.target_names[2])
# add a legend to explain which points are which
matplotlib.pyplot.legend(handles=[scatter_0, scatter_1, scatter_2])
# show the graph
matplotlib.pyplot.show()
###Output
_____no_output_____
###Markdown
Model TrainingNext, we want to fit our decision tree model to the iris data we're using.
###Code
# Train the model
model = tree.DecisionTreeClassifier()
model.fit(iris.data, iris.target)
###Output
_____no_output_____
###Markdown
Visualization of Model OutputUsing graphviz and pydot, we can create a flowchart that shows the model decisions. The flowchart will be printed to a PDF on your desktop.
###Code
dot_data = StringIO()
tree.export_graphviz(model, out_file=dot_data, feature_names=iris.feature_names, class_names=iris.target_names,
filled=True, rounded=True, special_characters=True)
graph = pydot.graph_from_dot_data(dot_data.getvalue())[0]
graph.write_pdf(os.path.expanduser("~/Desktop/iris_decision_tree.pdf"))
###Output
_____no_output_____
###Markdown
PredictionNow we can make some predictions using the trained model. We'll pull out some examples from our training data and see what the model says about them.
###Code
# Use the first input from each class
inputs = [iris.data[0], iris.data[start_type_one], iris.data[start_type_two]]
print('Class predictions: {0}'.format(model.predict(inputs))) # guess which class
print('Probabilities:\n{0}'.format(model.predict_proba(inputs))) # give probability of each class
###Output
Class predictions: [0 1 2]
Probabilities:
[[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]]
|
Course5_Sequence-Models/Week3/Neural Machine Translation/Neural+machine+translation+with+attention+-+v4.ipynb | ###Markdown
Neural Machine TranslationWelcome to your first programming assignment for this week! You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models. This notebook was produced together with NVIDIA's Deep Learning Institute. Let's load all the packages you will need for this assignment.
###Code
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
1 - Translating human readable dates into machine readable datesThe model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task. The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*) and translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. <!-- Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> 1.1 - DatasetWe will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
###Code
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
###Output
_____no_output_____
###Markdown
You've loaded:- `dataset`: a list of tuples of (human readable date, machine readable date)- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index - `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with `human_vocab`. - `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
###Code
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
###Output
X.shape: (10000, 30)
Y.shape: (10000, 10)
Xoh.shape: (10000, 30, 37)
Yoh.shape: (10000, 10, 11)
###Markdown
You now have:- `X`: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via `human_vocab`. Each date is further padded to $T_x$ values with a special pad character. `X.shape = (m, Tx)`- `Y`: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in `machine_vocab`. You should have `Y.shape = (m, Ty)`. - `Xoh`: one-hot version of `X`, the "1" entry's index is mapped to the character thanks to `human_vocab`. `Xoh.shape = (m, Tx, len(human_vocab))`- `Yoh`: one-hot version of `Y`, the "1" entry's index is mapped to the character thanks to `machine_vocab`. `Yoh.shape = (m, Ty, len(machine_vocab))`. Here, `len(machine_vocab) = 11` since there are 11 characters ('-' as well as 0-9). Let's also look at some examples of preprocessed training examples. Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed.
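As a small illustration (not part of the graded code), the sketch below shows roughly what `preprocess_data` does for a single example, using the `string_to_int` helper from `nmt_utils` together with `to_categorical`; the exact padding token is whatever `human_vocab` defines for it.
```python
# Rough sketch of the per-example preprocessing; shapes assume Tx = 30 and len(human_vocab) = 37.
example = "9 may 1998"
indices = string_to_int(example, Tx, human_vocab)      # Tx indices, padded at the end
one_hot = np.array([to_categorical(i, num_classes=len(human_vocab)) for i in indices])
print(len(indices), one_hot.shape)                     # 30 and (30, 37)
```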
###Code
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
###Output
Source date: 9 may 1998
Target date: 1998-05-09
Source after preprocessing (indices): [12 0 24 13 34 0 4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36
36 36 36 36 36]
Target after preprocessing (indices): [ 2 10 10 9 0 1 6 0 1 10]
Source after preprocessing (one-hot): [[ 0. 0. 0. ..., 0. 0. 0.]
[ 1. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 1.]
[ 0. 0. 0. ..., 0. 0. 1.]
[ 0. 0. 0. ..., 0. 0. 1.]]
Target after preprocessing (one-hot): [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
2 - Neural machine translation with attentionIf you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. The attention mechanism tells a Neural Machine Translation model where it should pay attention at any step. 2.1 - Attention mechanismIn this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$). **Figure 1**: Neural machine translation with attention Here are some properties of the model that you may notice: - There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism, we will call it *pre-attention* Bi-LSTM. The LSTM at the top of the diagram comes *after* the attention mechanism, so we will call it the *post-attention* LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps. - The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state captured by the RNN was just the output activation $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date. - We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM. - The diagram on the right uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ to compute $e^{\langle t, t' \rangle}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. Let's implement this model. 
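Before building the Keras version, here is a toy numpy sketch (with made-up shapes and values) of what one attention step computes: a softmax over the energies $e^{\langle t, t' \rangle}$ gives the weights $\alpha^{\langle t, t' \rangle}$, and the context is the weighted sum of the Bi-LSTM hidden states.
```python
# Toy numpy illustration only; the actual model uses the shared Keras layers defined below.
Tx_demo, n_hidden = 5, 4
a_demo = np.random.randn(Tx_demo, n_hidden)            # stand-in for [a<1>, ..., a<Tx>]
energies = np.random.randn(Tx_demo)                    # stand-in for e<t, t'>
alphas = np.exp(energies) / np.sum(np.exp(energies))   # softmax over t'
context = np.sum(alphas[:, None] * a_demo, axis=0)     # weighted sum, shape (n_hidden,)
print(alphas.sum(), context.shape)                     # ~1.0 and (4,)
```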
You will start by implementing two functions: `one_step_attention()` and `model()`.**1) `one_step_attention()`**: At step $t$, given all the hidden states of the Bi-LSTM ($[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$) and the previous hidden state of the second LSTM ($s^{\langle t-1 \rangle}$), `one_step_attention()` will compute the attention weights ($[\alpha^{\langle t,1 \rangle},\alpha^{\langle t,2 \rangle}, ..., \alpha^{\langle t,T_x \rangle}]$) and output the context vector (see Figure 1 (right) for details):$$context^{\langle t \rangle} = \sum_{t' = 0}^{T_x} \alpha^{\langle t,t' \rangle}a^{\langle t' \rangle}\tag{1}$$ Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$. **2) `model()`**: Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$. Then, it calls `one_step_attention()` $T_y$ times (`for` loop). At each iteration of this loop, it gives the computed context vector $context^{\langle t \rangle}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{\langle t \rangle}$. **Exercise**: Implement `one_step_attention()`. The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initialize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:1. Define the layer objects (as global variables, for example).2. Call these objects when propagating the input.We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: [RepeatVector()](https://keras.io/layers/core/repeatvector), [Concatenate()](https://keras.io/layers/merge/concatenate), [Dense()](https://keras.io/layers/core/dense), [Activation()](https://keras.io/layers/core/activation), [Dot()](https://keras.io/layers/merge/dot).
###Code
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
###Output
_____no_output_____
###Markdown
Now you can use these layers to implement `one_step_attention()`. In order to propagate a Keras tensor object X through one of these layers, use `layer(X)` (or `layer([X,Y])` if it requires multiple inputs.), e.g. `densor(X)` will propagate X through the `Dense(1)` layer defined above.
###Code
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attetion) LSTM cell
"""
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = concatenator([a, s_prev])
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = dotor([alphas, a])
### END CODE HERE ###
return context
###Output
_____no_output_____
###Markdown
You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function. **Exercise**: Implement `model()` as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in `model()`.
###Code
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
###Output
_____no_output_____
###Markdown
Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: 1. Propagate the input into a [Bidirectional](https://keras.io/layers/wrappers/bidirectional) [LSTM](https://keras.io/layers/recurrent/lstm)2. Iterate for $t = 0, \dots, T_y-1$: 1. Call `one_step_attention()` on $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$ and $s^{\langle t-1 \rangle}$ to get the context vector $context^{\langle t \rangle}$. 2. Give $context^{\langle t \rangle}$ to the post-attention LSTM cell. Remember to pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-state $c^{\langle t-1\rangle}$ of this LSTM using `initial_state= [previous hidden state, previous cell state]`. Get back the new hidden state $s^{\langle t \rangle}$ and the new cell state $c^{\langle t \rangle}$. 3. Apply a softmax layer to $s^{\langle t \rangle}$, get the output. 4. Save the output by adding it to the list of outputs.3. Create your Keras model instance, it should have three inputs ("inputs", $s^{\langle 0 \rangle}$ and $c^{\langle 0 \rangle}$) and output the list of "outputs".
###Code
# GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences=True, name='bidirectional_1'), merge_mode='concat')(X)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a, s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context, initial_state = [s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model(inputs=(X, s0, c0), outputs=outputs)
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Run the following cell to create your model.
###Code
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
###Output
_____no_output_____
###Markdown
Let's get a summary of the model to check if it matches the expected output.
###Code
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 30, 37) 0
____________________________________________________________________________________________________
s0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0]
lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0]
repeat_vector_1[0][0]
bidirectional_1[0][0]
repeat_vector_1[1][0]
bidirectional_1[0][0]
repeat_vector_1[2][0]
bidirectional_1[0][0]
repeat_vector_1[3][0]
bidirectional_1[0][0]
repeat_vector_1[4][0]
bidirectional_1[0][0]
repeat_vector_1[5][0]
bidirectional_1[0][0]
repeat_vector_1[6][0]
bidirectional_1[0][0]
repeat_vector_1[7][0]
bidirectional_1[0][0]
repeat_vector_1[8][0]
bidirectional_1[0][0]
repeat_vector_1[9][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0]
concatenate_1[1][0]
concatenate_1[2][0]
concatenate_1[3][0]
concatenate_1[4][0]
concatenate_1[5][0]
concatenate_1[6][0]
concatenate_1[7][0]
concatenate_1[8][0]
concatenate_1[9][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 30, 1) 11 dense_1[0][0]
dense_1[1][0]
dense_1[2][0]
dense_1[3][0]
dense_1[4][0]
dense_1[5][0]
dense_1[6][0]
dense_1[7][0]
dense_1[8][0]
dense_1[9][0]
____________________________________________________________________________________________________
attention_weights (Activation) (None, 30, 1) 0 dense_2[0][0]
dense_2[1][0]
dense_2[2][0]
dense_2[3][0]
dense_2[4][0]
dense_2[5][0]
dense_2[6][0]
dense_2[7][0]
dense_2[8][0]
dense_2[9][0]
____________________________________________________________________________________________________
dot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0]
bidirectional_1[0][0]
attention_weights[1][0]
bidirectional_1[0][0]
attention_weights[2][0]
bidirectional_1[0][0]
attention_weights[3][0]
bidirectional_1[0][0]
attention_weights[4][0]
bidirectional_1[0][0]
attention_weights[5][0]
bidirectional_1[0][0]
attention_weights[6][0]
bidirectional_1[0][0]
attention_weights[7][0]
bidirectional_1[0][0]
attention_weights[8][0]
bidirectional_1[0][0]
attention_weights[9][0]
bidirectional_1[0][0]
____________________________________________________________________________________________________
c0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
lstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0]
s0[0][0]
c0[0][0]
dot_1[1][0]
lstm_1[0][0]
lstm_1[0][2]
dot_1[2][0]
lstm_1[1][0]
lstm_1[1][2]
dot_1[3][0]
lstm_1[2][0]
lstm_1[2][2]
dot_1[4][0]
lstm_1[3][0]
lstm_1[3][2]
dot_1[5][0]
lstm_1[4][0]
lstm_1[4][2]
dot_1[6][0]
lstm_1[5][0]
lstm_1[5][2]
dot_1[7][0]
lstm_1[6][0]
lstm_1[6][2]
dot_1[8][0]
lstm_1[7][0]
lstm_1[7][2]
dot_1[9][0]
lstm_1[8][0]
lstm_1[8][2]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 11) 715 lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
lstm_1[9][0]
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________________________
###Markdown
**Expected Output**:Here is the summary you should see **Total params:** 52,960 **Trainable params:** 52,960 **Non-trainable params:** 0 **bidirectional_1's output shape ** (None, 30, 64) **repeat_vector_1's output shape ** (None, 30, 64) **concatenate_1's output shape ** (None, 30, 128) **attention_weights's output shape ** (None, 30, 1) **dot_1's output shape ** (None, 1, 64) **dense_3's output shape ** (None, 11) As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, a custom [Adam](https://keras.io/optimizers/adam) [optimizer](https://keras.io/optimizers/usage-of-optimizers) (`learning rate = 0.005`, $\beta_1 = 0.9$, $\beta_2 = 0.999$, `decay = 0.01`) and `['accuracy']` metrics:
###Code
### START CODE HERE ### (≈2 lines)
from keras import optimizers
opt = optimizers.Adam(lr=0.005, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
The last step is to define all your inputs and outputs to fit the model:- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with 0s.- Given the `model()` you coded, you need the "outputs" to be a list of $T_y = 10$ elements, each of shape (m, `len(machine_vocab)` = 11), so that `outputs[j]` holds the true (one-hot) labels of the $j^{th}$ output character for all $m$ training examples. More generally, `outputs[j][i]` is the true label of the $j^{th}$ character in the $i^{th}$ training example (`X[i]`).
###Code
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
###Output
_____no_output_____
###Markdown
Let's now fit the model and run it for one epoch.
###Code
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
###Output
Epoch 1/1
10000/10000 [==============================] - 42s - loss: 0.0545 - dense_3_loss_1: 3.0622e-04 - dense_3_loss_2: 1.7611e-04 - dense_3_loss_3: 4.4359e-04 - dense_3_loss_4: 9.8315e-04 - dense_3_loss_5: 2.0396e-05 - dense_3_loss_6: 0.0068 - dense_3_loss_7: 0.0039 - dense_3_loss_8: 1.1690e-05 - dense_3_loss_9: 0.0315 - dense_3_loss_10: 0.0104 - dense_3_acc_1: 1.0000 - dense_3_acc_2: 1.0000 - dense_3_acc_3: 1.0000 - dense_3_acc_4: 1.0000 - dense_3_acc_5: 1.0000 - dense_3_acc_6: 0.9981 - dense_3_acc_7: 0.9993 - dense_3_acc_8: 1.0000 - dense_3_acc_9: 0.9909 - dense_3_acc_10: 0.9976
###Markdown
While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
###Code
model.load_weights('models/model.h5')
###Output
_____no_output_____
###Markdown
You can now see the results on new examples.
###Code
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
###Output
source: 3 May 1979
output: 1979-05-03
source: 5 April 09
output: 2009-05-05
source: 21th of August 2016
output: 2016-08-21
source: Tue 10 Jul 2007
output: 2007-07-10
source: Saturday May 9 2018
output: 2018-05-09
source: March 3 2001
output: 2001-03-03
source: March 3rd 2001
output: 2001-03-03
source: 1 March 2001
output: 2001-03-01
###Markdown
You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. 3 - Visualizing Attention (Optional / Ungraded)Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this: **Figure 8**: Full Attention MapNotice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." 3.1 - Getting the activations from the networkLet's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model.
###Code
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 30, 37) 0
____________________________________________________________________________________________________
s0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0]
lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0]
repeat_vector_1[0][0]
bidirectional_1[0][0]
repeat_vector_1[1][0]
bidirectional_1[0][0]
repeat_vector_1[2][0]
bidirectional_1[0][0]
repeat_vector_1[3][0]
bidirectional_1[0][0]
repeat_vector_1[4][0]
bidirectional_1[0][0]
repeat_vector_1[5][0]
bidirectional_1[0][0]
repeat_vector_1[6][0]
bidirectional_1[0][0]
repeat_vector_1[7][0]
bidirectional_1[0][0]
repeat_vector_1[8][0]
bidirectional_1[0][0]
repeat_vector_1[9][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0]
concatenate_1[1][0]
concatenate_1[2][0]
concatenate_1[3][0]
concatenate_1[4][0]
concatenate_1[5][0]
concatenate_1[6][0]
concatenate_1[7][0]
concatenate_1[8][0]
concatenate_1[9][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 30, 1) 11 dense_1[0][0]
dense_1[1][0]
dense_1[2][0]
dense_1[3][0]
dense_1[4][0]
dense_1[5][0]
dense_1[6][0]
dense_1[7][0]
dense_1[8][0]
dense_1[9][0]
____________________________________________________________________________________________________
attention_weights (Activation) (None, 30, 1) 0 dense_2[0][0]
dense_2[1][0]
dense_2[2][0]
dense_2[3][0]
dense_2[4][0]
dense_2[5][0]
dense_2[6][0]
dense_2[7][0]
dense_2[8][0]
dense_2[9][0]
____________________________________________________________________________________________________
dot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0]
bidirectional_1[0][0]
attention_weights[1][0]
bidirectional_1[0][0]
attention_weights[2][0]
bidirectional_1[0][0]
attention_weights[3][0]
bidirectional_1[0][0]
attention_weights[4][0]
bidirectional_1[0][0]
attention_weights[5][0]
bidirectional_1[0][0]
attention_weights[6][0]
bidirectional_1[0][0]
attention_weights[7][0]
bidirectional_1[0][0]
attention_weights[8][0]
bidirectional_1[0][0]
attention_weights[9][0]
bidirectional_1[0][0]
____________________________________________________________________________________________________
c0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
lstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0]
s0[0][0]
c0[0][0]
dot_1[1][0]
lstm_1[0][0]
lstm_1[0][2]
dot_1[2][0]
lstm_1[1][0]
lstm_1[1][2]
dot_1[3][0]
lstm_1[2][0]
lstm_1[2][2]
dot_1[4][0]
lstm_1[3][0]
lstm_1[3][2]
dot_1[5][0]
lstm_1[4][0]
lstm_1[4][2]
dot_1[6][0]
lstm_1[5][0]
lstm_1[5][2]
dot_1[7][0]
lstm_1[6][0]
lstm_1[6][2]
dot_1[8][0]
lstm_1[7][0]
lstm_1[7][2]
dot_1[9][0]
lstm_1[8][0]
lstm_1[8][2]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 11) 715 lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
lstm_1[9][0]
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________________________
###Markdown
Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_1` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the activations from this layer. The function `plot_attention_map()` pulls out the attention values from your model and plots them.
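If you prefer not to use the helper, one hedged way to pull the attention values out manually is to build a probe model on the existing layers; the sketch below assumes the layer name `attention_weights` from the summary above and that the shared activation layer has been called $T_y$ times (one output node per time step).
```python
# Hedged sketch: a probe model that returns the alphas for every output time step.
attention_layer = model.get_layer('attention_weights')
attn_outputs = [attention_layer.get_output_at(t) for t in range(Ty)]   # one node per call
probe = Model(inputs=model.inputs, outputs=attn_outputs)               # reuses trained weights
# alphas = probe.predict([source, s0, c0])   # list of Ty arrays, each of shape (m, 30, 1)
```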
###Code
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
###Output
_____no_output_____ |
ember2018-AHBO-TPE-hyperparameters comparison done.ipynb | ###Markdown
trying to find the best parameters for Ember Bayesian Optimization PrimerThe problem with grid and random search is that these are uninformed methods because they do not use the past results from different values of hyperparameters in the objective function (remember the objective function takes in the hyperparameters and returns the model cross validation score). We record the results of the objective function for each set of hyperparameters, but the algorithms do not select the next hyperparameter values from this information. Intuitively, if we have the past results, we should use them to reason about what hyperparameter values work the best and choose the next values wisely to try and spend more iterations evaluating promising values. Evaluating hyperparameters in the objective function is very time-consuming, and the concept of Bayesian optimization is to limit calls to the evaluation function by choosing the next hyperparameter values based on the previous results. This allows the algorithm to spend more time evaluating promising hyperparameter values and less time in low-scoring regions of the hyperparameter space. Bayesian optimization works by building a surrogate function (in the form of a probability model) of the objective function, P(score | hyperparameters). The surrogate function is much cheaper to evaluate than the objective, so the algorithm chooses the next values to try in the objective based on maximizing a criterion on the surrogate (usually expected improvement). The surrogate function is based on past evaluation results - pairs of (score, hyperparameter) records - and is continually updated with each objective function evaluation. Bayesian optimization therefore uses Bayesian reasoning: form an initial model (called a prior) and then update it with more evidence. The idea is that as the data accumulates, the surrogate function gets closer and closer to the objective function, and the hyperparameter values that are the best in the surrogate function will also do the best in the objective function. Bayesian optimization methods differ in the algorithm used to build the surrogate function and choose the next hyperparameter values to try. Some of the common choices are Gaussian Process (implemented in Spearmint), Random Forest Regression (in SMAC), and the Tree Parzen Estimator (TPE) in Hyperopt. Four Parts of Bayesian OptimizationBayesian hyperparameter optimization requires the same four parts as we implemented in grid and random search: (1) Objective Function: takes in an input (hyperparameters) and returns a score to minimize or maximize (the cross validation score); (2) Domain space: the range of input values (hyperparameters) to evaluate; (3) Optimization Algorithm: the method used to construct the surrogate function and choose the next values to evaluate; (4) Results: score, value pairs that the algorithm uses to build the surrogate function. The only differences are that now our objective function will return a score to minimize (this is just convention in the field of optimization), our domain space will be probability distributions rather than a hyperparameter grid, and the optimization algorithm will be an informed method that uses past results to choose the next hyperparameter values to evaluate. HyperoptHyperopt is an open-source Python library that implements Bayesian Optimization using the Tree Parzen Estimator algorithm to construct the surrogate function and select the next hyperparameter values to evaluate in the objective function. 
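Before tuning the GBM, here is a minimal, self-contained sketch of the four parts on a toy problem (minimizing (x - 3)^2); it only illustrates the Hyperopt workflow, not the actual tuning done later in this notebook.
```python
# Toy example of the four parts: objective, domain, algorithm, results history.
from hyperopt import fmin, tpe, hp, Trials

def toy_objective(x):                  # 1) objective: returns a value to minimize
    return (x - 3.0) ** 2

toy_space = hp.uniform('x', -5, 5)     # 2) domain space
toy_trials = Trials()                  # 4) results history
best_x = fmin(fn=toy_objective, space=toy_space,
              algo=tpe.suggest,        # 3) optimization algorithm (TPE)
              max_evals=50, trials=toy_trials)
print(best_x)                          # e.g. {'x': 2.98...}
```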
There are a number of other libraries such as Spearmint (Gaussian process surrogate function) and SMAC (random forest regression surrogate function) sharing the same problem structure. The four parts of an optimization problem that we develop here will apply to all the libraries with only a change in syntax. Moreover, the optimization methods as applied to the Gradient Boosting Machine will translate to other machine learning models or any problem where we have to minimize a function. Gradient Boosting MachineWe will use the gradient boosting machine (GBM) as our model to tune in the LightGBM library. Cross Validation with Early StoppingAs with random and grid search, we will evaluate each set of hyperparameters using 3 fold cross validation on the training data. The GBM model will be trained with early stopping, where estimators are added to the ensemble until the validation score has not decreased for 100 iterations (estimators added). Cross validation and early stopping will be implemented using the LightGBM cv function. We will use 3 folds and 100 early stopping rounds.
###Code
# Data manipulation
import pandas as pd
import numpy as np
# Modeling
import lightgbm as lgb
# Evaluation of the model
from sklearn.metrics import roc_auc_score
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['font.size'] = 18
%matplotlib inline
# Governing choices for search
N_FOLDS = 3
MAX_EVALS = 3
###Output
_____no_output_____
###Markdown
The code below reads in the data and creates a training set and a testing set. We can only use the testing data a single time, when we evaluate the final model. Hyperparameter tuning must be done on the training data using cross validation!
###Code
%%time
emberdf = ember.read_metadata(data_dir)
X_train, y_train, X_test, y_test = ember.read_vectorized_features(data_dir)
lgbm_model = lgb.Booster(model_file=os.path.join(data_dir, "ember_model_2018.txt"))
print('Training data shape: ', X_train.shape)
print('Testing data shape: ', X_test.shape)
###Output
Training data shape: (800000, 2381)
Testing data shape: (200000, 2381)
Wall time: 2.46 s
###Markdown
Baseline ModelFirst we can create a model with the default value of hyperparameters and score it using cross validation with early stopping. Using the cv LightGBM function requires creating a Dataset.
###Code
model = lgb.LGBMClassifier()
# Training set
train_set = lgb.Dataset(X_train, label = y_train)
test_set = lgb.Dataset(X_test, label = y_test)
# Default hyperparamters
hyperparameters = model.get_params()
# Using early stopping to determine number of estimators.
del hyperparameters['n_estimators']
# Perform cross validation with early stopping
cv_results = lgb.cv(hyperparameters, train_set, num_boost_round = 10000, nfold = N_FOLDS, metrics = 'auc',
early_stopping_rounds = 100, verbose_eval = False, seed = 42)
# Highest score
best = cv_results['auc-mean'][-1]
# Standard deviation of best score
best_std = cv_results['auc-stdv'][-1]
print('The maximum ROC AUC in cross validation was {:.3f} with std of {:.3f}.'.format(best, best_std))
print('The ideal number of iterations was {}.'.format(len(cv_results['auc-mean'])))
###Output
C:\Users\fahad\anaconda3\lib\site-packages\lightgbm\basic.py:842: UserWarning: silent keyword has been found in `params` and will be ignored.
Please use silent argument of the Dataset constructor to pass this parameter.
.format(key))
###Markdown
Now we can evaluate the baseline model on the testing data
###Code
%%time
# Optimal number of esimators found in cv
model.n_estimators = len(cv_results['auc-mean'])
# Train and make predicions with model
model.fit(X_train, y_train)
preds = model.predict_proba(X_test)[:, 1]
baseline_auc = roc_auc_score(y_test, preds)
print('The baseline model scores {:.3f} ROC AUC on the test set.'.format(baseline_auc))
###Output
The baseline model scores 0.010 ROC AUC on the test set.
Wall time: 58min 37s
###Markdown
Objective FunctionThe first part to write is the objective function which takes in a set of hyperparameter values and returns the cross validation score on the training data. An objective function in Hyperopt must return either a single real value to minimize, or a dictionary with a key "loss" with the score to minimize (and a key "status" indicating if the run was successful or not). Optimization is typically about minimizing a value, and because our metric is Receiver Operating Characteristic Area Under the Curve (ROC AUC) where higher is better, the objective function will return 1 - (cross validation ROC AUC). The algorithm will try to drive this value as low as possible (raising the ROC AUC) by choosing the next hyperparameters based on the past results. The complete objective function is shown below. The results will be written to a csv file on each call of the function in order to track results as the search progresses and so we have a saved record of the search.
###Code
%%time
import csv
from hyperopt import STATUS_OK
from timeit import default_timer as timer
def objective(hyperparameters):
"""Objective function for Gradient Boosting Machine Hyperparameter Optimization.
Writes a new line to `outfile` on every iteration"""
# Keep track of evals
global ITERATION
ITERATION += 1
# Using early stopping to find number of trees trained
if 'n_estimators' in hyperparameters:
del hyperparameters['n_estimators']
# Retrieve the subsample
subsample = hyperparameters['boosting_type'].get('subsample', 1.0)
# Extract the boosting type and subsample to top level keys
hyperparameters['boosting_type'] = hyperparameters['boosting_type']['boosting_type']
hyperparameters['subsample'] = subsample
# Make sure parameters that need to be integers are integers
for parameter_name in ['num_leaves', 'subsample_for_bin', 'min_child_samples']:
hyperparameters[parameter_name] = int(hyperparameters[parameter_name])
start = timer()
# Perform n_folds cross validation
cv_results = lgb.cv(hyperparameters, train_set, num_boost_round = 10000, nfold = N_FOLDS,
early_stopping_rounds = 100, metrics = 'auc', seed = 50)
run_time = timer() - start
# Extract the best score
best_score = cv_results['auc-mean'][-1]
# Loss must be minimized
loss = 1 - best_score
# Boosting rounds that returned the highest cv score
n_estimators = len(cv_results['auc-mean'])
# Add the number of estimators to the hyperparameters
hyperparameters['n_estimators'] = n_estimators
# Write to the csv file ('a' means append)
of_connection = open(OUT_FILE, 'a')
writer = csv.writer(of_connection)
writer.writerow([loss, hyperparameters, ITERATION, run_time, best_score])
of_connection.close()
# Dictionary with information for evaluation
return {'loss': loss, 'hyperparameters': hyperparameters, 'iteration': ITERATION,
'train_time': run_time, 'status': STATUS_OK}
###Output
Wall time: 1.96 s
###Markdown
DomainSpecifying the domain (called the space in Hyperopt) is a little trickier than in grid search. In Hyperopt, and other Bayesian optimization frameworks, the domain is not a discrete grid but instead has probability distributions for each hyperparameter. For each hyperparameter, we will use the same limits as with the grid, but instead of being defined at each point, the domain represents probabilities for each hyperparameter. This will probably become clearer in the code and the images!
###Code
from hyperopt import hp
from hyperopt.pyll.stochastic import sample
###Output
_____no_output_____
###Markdown
First we will go through an example of the learning rate. We are using a log-uniform space for the learning rate, defined from 0.005 to 0.2 in the code below (the full search space later uses 0.01 to 0.5). The log-uniform distribution has the values evenly placed in logarithmic space rather than linear space. This is useful for variables that differ over several orders of magnitude such as the learning rate. For example, with a log-uniform distribution from 0.005 to 0.5, there would be an equal chance of drawing a value from 0.005 to 0.05 and from 0.05 to 0.5 (in linear space far more values would be drawn from the latter since the linear distance is much larger; the logarithmic distance is exactly the same - a factor of 10).
###Code
%%time
# Create the learning rate
learning_rate = {'learning_rate': hp.loguniform('learning_rate', np.log(0.005), np.log(0.2))}
###Output
Wall time: 1 ms
###Markdown
We can visualize the learning rate by drawing 10000 samples from the distribution.
###Code
%%time
learning_rate_dist = []
# Draw 10000 samples from the learning rate domain
for _ in range(10000):
learning_rate_dist.append(sample(learning_rate)['learning_rate'])
plt.figure(figsize = (8, 6))
sns.kdeplot(learning_rate_dist, color = 'red', linewidth = 2, shade = True);
plt.title('Learning Rate Distribution', size = 18); plt.xlabel('Learning Rate', size = 16); plt.ylabel('Density', size = 16);
###Output
Wall time: 3.48 s
###Markdown
The number of leaves on the other hand is a discrete uniform distribution.
###Code
%%time
# Discrete uniform distribution
num_leaves = {'num_leaves': hp.quniform('num_leaves', 128, 512, 1)}
num_leaves_dist = []
# Sample 10000 times from the number of leaves distribution
for _ in range(10000):
num_leaves_dist.append(sample(num_leaves)['num_leaves'])
# kdeplot
plt.figure(figsize = (8, 6))
sns.kdeplot(num_leaves_dist, linewidth = 2, shade = True);
plt.title('Number of Leaves Distribution', size = 18); plt.xlabel('Number of Leaves', size = 16); plt.ylabel('Density', size = 16);
###Output
Wall time: 3.65 s
###Markdown
Conditional Domain In Hyperopt, we can use nested conditional statements to indicate hyperparameters that depend on other hyperparameters. For example, the "goss" boosting_type cannot use subsampling, so when we set up the boosting_type categorical variable, we have to set the subsample to 1.0 while for the other boosting types it's a float between 0.5 and 1.0.
###Code
%%time
# boosting type domain
boosting_type = {'boosting_type': hp.choice('boosting_type',
[{'boosting_type': 'gbdt', 'subsample': hp.uniform('subsample', 0.5, 1)},
{'boosting_type': 'goss', 'subsample': 1.0}])}
# Draw a sample
hyperparams = sample(boosting_type)
hyperparams
###Output
Wall time: 999 µs
###Markdown
We need to set both the boosting_type and subsample as top-level keys in the parameter dictionary. We can use the Python dict.get method with a default value of 1.0. This means that if the key is not present in the dictionary, the value returned will be the default (1.0). One aspect to note is that if boosting_type is goss, then we cannot use subsample (which refers to training on only a fraction of the rows in the training data, a technique known as stochastic gradient boosting). Therefore, we will need a line of logic in our algorithm that sets the subsample to 1.0 (which means use all the rows) if boosting_type=goss. As an example below, if we randomly select a set of hyperparameters, and the boosting type is "goss", then we set the subsample to 1.0.
###Code
%%time
# Retrieve the subsample if present otherwise set to 1.0
subsample = hyperparams['boosting_type'].get('subsample', 1.0)
# Extract the boosting type
hyperparams['boosting_type'] = hyperparams['boosting_type']['boosting_type']
hyperparams['subsample'] = subsample
hyperparams
###Output
Wall time: 0 ns
###Markdown
Usage of nested dictionary with GBMThe gbm cannot use the nested dictionary, so we need to set the boosting_type and subsample as top-level keys. Nested conditionals allow us to use a different set of hyperparameters depending on other hyperparameters. For example, we can explore different models with completely different sets of hyperparameters by using nested conditionals (see the sketch below). The only requirement is that the first nested statement must be based on a choice hyperparameter (the choice could be the type of model). Complete Bayesian Domain Now we can define the entire domain. Each variable needs to have a label and a few parameters specifying the type and extent of the distribution. For the variables such as boosting type that are categorical, we use the choice variable. Other variable types include quniform, loguniform, and uniform. For the complete list, check out the documentation for Hyperopt. Altogether there are 12 hyperparameters to optimize.
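As a hedged, purely illustrative sketch of that idea (it is not used for the GBM domain defined in the next cell), a nested choice over model types could look like the following; all names and ranges here are made up.
```python
# Illustrative only: a top-level 'choice' picks the model, each branch carries its own hyperparameters.
model_space = hp.choice('model_type', [
    {'model': 'gbdt',
     'learning_rate': hp.loguniform('gbdt_learning_rate', np.log(0.01), np.log(0.5)),
     'num_leaves': hp.quniform('gbdt_num_leaves', 16, 256, 1)},
    {'model': 'random_forest',
     'n_estimators': hp.quniform('rf_n_estimators', 50, 500, 1),
     'max_depth': hp.quniform('rf_max_depth', 3, 12, 1)},
])
sample(model_space)    # returns a dict for one of the two branches
```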
###Code
%%time
# Define the search space
space = {'boosting_type': hp.choice('boosting_type',
[{'boosting_type': 'gbdt', 'subsample': hp.uniform('gdbt_subsample', 0.5, 1)},
{'boosting_type': 'goss', 'subsample': 1.0}]),
'num_leaves': hp.quniform('num_leaves', 128, 512, 1),
'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.5)),
'subsample_for_bin': hp.quniform('subsample_for_bin', 20000, 200000, 20000),
'min_child_samples': hp.quniform('min_child_samples', 20, 500, 5),
'reg_alpha': hp.uniform('reg_alpha', 0.0, 1.0),
'reg_lambda': hp.uniform('reg_lambda', 0.0, 1.0),
'colsample_bytree': hp.uniform('colsample_by_tree', 0.6, 1.0),
'feature_fraction': hp.uniform('feature_fraction',0.5, 1.0),
'bagging_fraction': hp.uniform('bagging_fraction',0.5, 1.0),
'is_unbalance': hp.choice('is_unbalance', [True, False]),
}
space['objective'] = 'binary'
###Output
Wall time: 999 µs
###Markdown
Example of Sampling from the DomainLet's sample from the domain (using the conditional logic) to see the result of each draw. Every time we run this code, the results will change. (Again notice that we need to assign the top level keys to the keywords understood by the GBM).
###Code
%%time
# Sample from the full space
x = sample(space)
# Conditional logic to assign top-level keys
subsample = x['boosting_type'].get('subsample', 1.0)
x['boosting_type'] = x['boosting_type']['boosting_type']
x['subsample'] = subsample
x
%%time
x = sample(space)
subsample = x['boosting_type'].get('subsample', 1.0)
x['boosting_type'] = x['boosting_type']['boosting_type']
x['subsample'] = subsample
x
###Output
Wall time: 3 ms
###Markdown
Let's test the objective function with the domain to make sure it works. (Every time the of_connection line is run, the outfile will be overwritten, so use a different name for each trial to save the results.)
###Code
%%time
# Create a new file and open a connection
OUT_FILE = 'bayes_test.csv'
of_connection = open(OUT_FILE, 'w')
writer = csv.writer(of_connection)
ITERATION = 0
# Write column names
headers = ['loss', 'hyperparameters', 'iteration', 'runtime', 'score']
writer.writerow(headers)
of_connection.close()
# Test the objective function
results = objective(sample(space))
print('The cross validation loss = {:.3f}.'.format(results['loss']))
print('The optimal number of estimators was {}.'.format(results['hyperparameters']['n_estimators']))
###Output
The cross validation loss = 0.102.
The optimal number of estimators was 827.
Wall time: 14h 41min 29s
###Markdown
Optimization AlgorithmThe optimization algorithm is the method for constructing the surrogate function (probability model) and selecting the next set of hyperparameters to evaluate in the objective function. Hyperopt has two choices: random search and Tree Parzen Estimator.The technical details of TPE can be found in this article(https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf) and a conceptual explanation is in this article(https://towardsdatascience.com/a-conceptual-explanation-of-bayesian-model-based-hyperparameter-optimization-for-machine-learning-b8172278050f). Although this is the most technical part of Bayesian hyperparameter optimization, defining the algorithm in Hyperopt is simple.
###Code
from hyperopt import tpe
# Create the algorithm
tpe_algorithm = tpe.suggest
###Output
_____no_output_____
###Markdown
Results HistoryThe final part is the history of objective function evaluations. Although Hyperopt internally keeps track of the results for the algorithm to use, if we want to monitor the results and have a saved copy of the search, we need to store the results ourselves. Here, we are using two methods to make sure we capture all the results: (1) a Trials object that stores the dictionary returned from the objective function, and (2) a line appended to a csv file on every iteration. The csv file option also lets us monitor the results of an on-going experiment. Also, do not use Excel to open the file while training is on-going. Instead check the results using tail results/out_file.csv from bash or open the file in Sublime Text or Notepad.
###Code
%%time
from hyperopt import Trials
# Record results
trials = Trials()
###Output
Wall time: 3.01 ms
###Markdown
The Trials object will hold everything returned from the objective function in the .results attribute. We can use this after the search is complete to inspect the results, but an easier method is to read in the csv file because that will already be in a dataframe.
###Code
%%time
# Create a file and open a connection
OUT_FILE = 'bayes_test.csv'
of_connection = open(OUT_FILE, 'w')
writer = csv.writer(of_connection)
ITERATION = 0
# Write column names
headers = ['loss', 'hyperparameters', 'iteration', 'runtime', 'score']
writer.writerow(headers)
of_connection.close()
###Output
Wall time: 2 ms
###Markdown
Automated Hyperparameter Optimization in PracticeWe have all four parts we need to run the optimization. To run Bayesian optimization we use the fmin function (a good reminder that we need a metric to minimize!)
###Code
from hyperopt import fmin
###Output
_____no_output_____
###Markdown
fmin takes the four parts defined above as well as the maximum number of iterations max_evals.
###Code
%%time
# Global variable
global ITERATION
ITERATION = 0
# Run optimization
best = fmin(fn = objective, space = space, algo = tpe.suggest, trials = trials,
max_evals = MAX_EVALS)
best
###Output
100%|██████████| 3/3 [13:37:33<00:00, 16351.28s/trial, best loss: 0.10217363841527549]
Wall time: 13h 37min 33s
###Markdown
The best object holds only the hyperparameters that returned the lowest loss in the objective function. Although this is ultimately what we are after, if we want to understand how the search progresses, we need to inspect the Trials object or the csv file. For example, we can sort the results returned from the objective function by the lowest loss:
###Code
%%time
# Sort the trials with lowest loss (highest AUC) first
trials_dict = sorted(trials.results, key = lambda x: x['loss'])
trials_dict[:1]
###Output
Wall time: 0 ns
###Markdown
An easier method is to read in the csv file since this will be a dataframe.
###Code
%%time
results = pd.read_csv(OUT_FILE)
###Output
Wall time: 150 ms
###Markdown
The function below takes in the results, trains a model on the training data, and evaluates it on the testing data. It returns a dataframe of hyperparameters from the search. Saving the results: saving the results to a csv file converts the dictionary of hyperparameters to a string. We need to map this back to a dictionary using ast.literal_eval.
###Code
%%time
import ast
def evaluate(results, name):
"""Evaluate model on test data using hyperparameters in results
Return dataframe of hyperparameters"""
new_results = results.copy()
# String to dictionary
new_results['hyperparameters'] = new_results['hyperparameters'].map(ast.literal_eval)
# Sort with best values on top
new_results = new_results.sort_values('score', ascending = False).reset_index(drop = True)
# Print out cross validation high score
print('The highest cross validation score from {} was {:.3f} found on iteration {}.'.format(name, new_results.loc[0, 'score'], new_results.loc[0, 'iteration']))
# Use best hyperparameters to create a model
hyperparameters = new_results.loc[0, 'hyperparameters']
model = lgb.LGBMClassifier(**hyperparameters)
# Train and make predictions
model.fit(X_train, y_train)
preds = model.predict_proba(X_test)[:, 1]
print('ROC AUC from {} on test data = {:.3f}.'.format(name, roc_auc_score(y_test, preds)))
# Create dataframe of hyperparameters
hyp_df = pd.DataFrame(columns = list(new_results.loc[0, 'hyperparameters'].keys()))
# Iterate through each set of hyperparameters that were evaluated
for i, hyp in enumerate(new_results['hyperparameters']):
hyp_df = hyp_df.append(pd.DataFrame(hyp, index = [0]),
ignore_index = True)
# Put the iteration and score in the hyperparameter dataframe
hyp_df['iteration'] = new_results['iteration']
hyp_df['score'] = new_results['score']
return hyp_df
%%time
bayes_results = evaluate(results, name = 'Bayesian')
bayes_results
###Output
The highest cross validation score from Bayesian was 0.898 found on iteration 3.
ROC AUC from Bayesian on test data = 0.009.
Wall time: 4h 3min 23s
###Markdown
Continue Optimization Hyperopt can continue searching where a previous search left off if we pass in a Trials object that already has results. The algorithms used in Bayesian optimization are black-box optimizers because they have no internal state. All they need is the previous results of objective function evaluations (the input values and loss) and they can build up the surrogate function and select the next values to evaluate in the objective function. This means that any search can be continued as long as we have the history in a Trials object.
###Code
%%time
MAX_EVALS = 3
# Continue training
best = fmin(fn = objective, space = space, algo = tpe.suggest, trials = trials,
max_evals = MAX_EVALS)
###Output
3trial [00:00, 2996.64trial/s, best loss=?]
Wall time: 91.7 ms
###Markdown
To save the Trials object so it can be read in later for more training, we can use the json format.
###Code
%%time
import json
# Save the trial results
with open('trials.json', 'w') as f:
f.write(json.dumps(trials_dict))
###Output
Wall time: 1 ms
###Markdown
To start the training from where it left off, simply load in the Trials object and pass it to an instance of fmin. (You might even be able to tweak the hyperparameter distribution and continue searching with the Trials object because the algorithm does not maintain an internal state.) Next StepsNow that we have developed all the necessary parts for automated hyperparameter tuning using Bayesian optimization, we can apply these to any dataset or any machine learning method. The functions taken here can be put in a script and run on a full dataset. Next, we will go through results from 100 evaluations on the dataset to see how the search progresses. I will continue running the Bayesian Hyperparameter optimization for 100 iterations below
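Before that, here is a hedged sketch of one way to actually persist and resume a search: pickling the Trials object itself (the json dump above stores only the sorted results, which cannot be passed back to fmin); the file name is illustrative.
```python
# Hedged sketch: persist the Trials object with pickle and resume the search later.
import pickle

with open('trials.pkl', 'wb') as f:
    pickle.dump(trials, f)

with open('trials.pkl', 'rb') as f:
    saved_trials = pickle.load(f)

# Ask for more evaluations than are already stored to keep searching.
best = fmin(fn = objective, space = space, algo = tpe.suggest,
            trials = saved_trials, max_evals = len(saved_trials.results) + 10)
```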
###Code
%%time
MAX_EVALS = 100
# Create a new file and open a connection
OUT_FILE = 'bayesian_trials_100.csv'
of_connection = open(OUT_FILE, 'w')
writer = csv.writer(of_connection)
# Write column names
headers = ['loss', 'hyperparameters', 'iteration', 'runtime', 'score']
writer.writerow(headers)
of_connection.close()
# Record results
trials = Trials()
global ITERATION
ITERATION = 0
best = fmin(fn = objective, space = space, algo = tpe.suggest,
trials = trials, max_evals = MAX_EVALS)
# Sort the trials with lowest loss (highest AUC) first
trials_dict = sorted(trials.results, key = lambda x: x['loss'])
print('Finished, best results')
print(trials_dict[:1])
#Save the trial results
with open('trials.json', 'w') as f:
f.write(json.dumps(trials_dict))
###Output
100%|██████████| 100/100 [671:11:07<00:00, 24162.68s/trial, best loss: 0.10183586177746706]
Finished, best results
[{'loss': 0.10183586177746706, 'hyperparameters': {'bagging_fraction': 0.9354093507612627, 'boosting_type': 'gbdt', 'colsample_bytree': 0.8959538632213808, 'feature_fraction': 0.7800657948373264, 'is_unbalance': True, 'learning_rate': 0.02469412244118082, 'min_child_samples': 145, 'num_leaves': 229, 'objective': 'binary', 'reg_alpha': 0.9880250400277432, 'reg_lambda': 0.4516852511612135, 'subsample_for_bin': 160000, 'subsample': 0.6300497061437562, 'n_estimators': 1227}, 'iteration': 26, 'train_time': 20919.2980698999, 'status': 'ok'}]
Wall time: 27d 23h 11min 7s
###Markdown
Search ResultsNext we will go through the results from 100 search iterations on the dataset. We will look at the scores, the distribution of hyperparameter values tried, and the evolution of values over time. After examining the search results, we will use the optimized hyperparameters to make predictions on the dataset.
###Code
%%time
bayes_results = pd.read_csv(r'C:\Users\fahad\bayesian_trials_100.csv').sort_values('score', ascending = False).reset_index()
random_results = pd.read_csv(r'C:\Users\fahad\random_search_trials_100.csv').sort_values('score', ascending = False).reset_index()
random_results['loss'] = 1 - random_results['score']
bayes_params = evaluate(bayes_results, name = 'Bayesian')
random_params = evaluate(random_results, name = 'random')
###Output
The highest cross validation score from Bayesian was 0.898 found on iteration 26.
ROC AUC from Bayesian on test data = 0.008.
The highest cross validation score from random was 0.898 found on iteration 60.
ROC AUC from random on test data = 0.009.
Wall time: 11h 17min 14s
###Markdown
We can see that the Bayesian search and Random Search are identical in cross validation; however, Bayesian search found better hyperparameters in the test set. We can get all the scores in a dataframe in order to plot them over the course of training.
###Code
%%time
# Dataframe of just scores
scores1 = pd.DataFrame({'ROC AUC': bayes_params['score'], 'iteration': bayes_params['iteration'], 'search': 'Bayes'})
scores2 = pd.DataFrame({'ROC AUC': random_params['score'], 'iteration': random_params['iteration'], 'search': 'Random'})
scores1['ROC AUC'] = scores1['ROC AUC'].astype(np.float32)
scores1['iteration'] = scores1['iteration'].astype(np.int32)
scores2['ROC AUC'] = scores2['ROC AUC'].astype(np.float32)
scores2['iteration'] = scores2['iteration'].astype(np.int32)
scores1.head()
scores2.head()
###Output
Wall time: 4 ms
###Markdown
We can also locate the best run from each search so we can mark its hyperparameter values on the plots that follow.
###Code
%%time
# Locate the best run from each search so its hyperparameter values can be marked on later plots
best_random_params = random_params.loc[random_params['score'].idxmax(), :].copy()
best_bayes_params = bayes_params.loc[bayes_params['score'].idxmax(), :].copy()
###Output
Wall time: 1 ms
###Markdown
Below is the code showing the progress of scores versus the iteration. For Bayesian optimization, we expect to see the scores increasing with the search as more promising hyperparameter values are tried.
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Combine the two score dataframes so both searches can be plotted together
scores = pd.concat([scores1, scores2], ignore_index = True)
# Plot of scores over the course of searching
sns.lmplot(x = 'iteration', y = 'ROC AUC', hue = 'search', data = scores, height = 7, legend = None, palette = "deep");
plt.scatter(bayes_params['iteration'], bayes_params['score'], marker = '*', s = 100, c = 'orange', edgecolor = 'k')
plt.scatter(random_params['iteration'], random_params['score'], marker = '*', s = 80, c = 'lightblue', edgecolor = 'k')
plt.xlabel('Iteration'); plt.ylabel('ROC AUC'); plt.title("Validation ROC AUC versus Iteration");
# A plain matplotlib scatter plot of the same data
plt.figure(figsize = (8, 6))
x = bayes_params['iteration']
y = bayes_params['score']
plt.scatter(x, y, c = 'orange', label = 'Bayes')
x = random_params['iteration']
y = random_params['score']
plt.scatter(x, y, c = 'lightblue', label = 'Random')
plt.legend()
plt.title('Validation ROC AUC versus Iteration')
plt.xlabel('Iteration')
plt.ylabel('ROC AUC')
#plt.savefig('ScatterPlot_09.png')
plt.show()
###Output
_____no_output_____
###Markdown
Sure enough, we see that the Bayesian hyperparameter optimization scores increase as the search continues. This shows that more promising values were tried as the search progressed. In this case, it looks like if we were to continue searching with Bayesian optimization, we would eventually reach higher scores on the cross validation data. Learning Rate DistributionNext we can start plotting the distributions of hyperparameter values searched. We expect random search to align with the search domain, while the Bayesian hyperparameter optimization should tend to focus on more promising values, wherever those happen to be in the search domain.The dashed vertical lines indicate the "optimal" value of the hyperparameter.
###Code
plt.figure(figsize = (20, 8))
plt.rcParams['font.size'] = 18
import seaborn as sns
# Density plots of the learning rate distributions
#sns.kdeplot(learning_rate_dist, label = 'Sampling Distribution', linewidth = 4)
sns.kdeplot(random_params['learning_rate'], label = 'Random Search', linewidth = 4)
sns.kdeplot(bayes_params['learning_rate'], label = 'Bayes Optimization', linewidth = 4)
plt.vlines([best_random_params['learning_rate'], best_bayes_params['learning_rate']],
ymin = 0.0, ymax = 50.0, linestyles = '--', linewidth = 4, colors = ['orange', 'green'])
plt.legend()
plt.xlabel('Learning Rate'); plt.ylabel('Density'); plt.title('Learning Rate Distribution');
###Output
_____no_output_____
###Markdown
Evolution of SearchAn interesting series of plots to make is the evolution of the hyperparameters over the search. This can show us what values the Bayesian optimization tended to focus on. The average cross validation score continued to improve throughout Bayesian optimization, indicating that "more promising" values of the hyperparameters were being evaluated and maybe a longer search would prove useful (or there could be a plateau in the validation scores with a longer search).
###Code
fig, axs = plt.subplots(1, 5, figsize = (24, 8))
i = 0
# Plot of five hyperparameters
for i, hyper in enumerate(['colsample_bytree', 'learning_rate', 'min_child_samples', 'num_leaves','feature_fraction']):
# Scatterplot
sns.regplot('iteration', hyper, data = bayes_params, ax = axs[i])
axs[i].scatter(best_bayes_params['iteration'], best_bayes_params[hyper], marker = '*', s = 200, c = 'k')
axs[i].set(xlabel = 'Iteration', ylabel = '{}'.format(hyper), title = '{} over Search'.format(hyper));
plt.tight_layout()
fig, axs = plt.subplots(1, 5, figsize = (24,8))
i = 0
# Scatterplot of next five hyperparameters
for i, hyper in enumerate(['reg_alpha', 'reg_lambda', 'subsample_for_bin', 'subsample','bagging_fraction']):
sns.regplot('iteration', hyper, data = bayes_params, ax = axs[i])
axs[i].scatter(best_bayes_params['iteration'], best_bayes_params[hyper], marker = '*', s = 200, c = 'k')
axs[i].set(xlabel = 'Iteration', ylabel = '{}'.format(hyper), title = '{} over Search'.format(hyper));
plt.tight_layout()
###Output
_____no_output_____
###Markdown
The final plot is just a bar chart of the boosting_type.
###Code
fig, axs = plt.subplots(1, 2, sharey = True, sharex = True)
# Bar plots of boosting type
random_params['boosting_type'].value_counts().plot.bar(ax = axs[0], figsize = (14, 6), color = 'orange', title = 'Random Search Boosting Type')
bayes_params['boosting_type'].value_counts().plot.bar(ax = axs[1], figsize = (14, 6), color = 'green', title = 'Bayes Optimization Boosting Type');
###Output
_____no_output_____ |
g3doc/tutorials/graph_keras_mlp_cora.ipynb | ###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup 1. Install TensorFlow 2.x to create an interactive developing environment with eager execution.2. Install the Neural Structured Learning package.
###Code
!pip install --quiet tensorflow==2.0.0-rc0
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
###Output
_____no_output_____
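###Markdown
As a quick sanity check, we can peek at one record of the raw data. This is an illustrative sketch that assumes the standard tab-separated layout of `cora.content` (paper id, 1433 binary word indicators, class label); it is not part of the original tutorial flow.
###Code
# Inspect the first record of the raw Cora content file (sketch; assumes
# tab-separated columns: paper_id, 1433 word indicators, class label).
with open('/tmp/cora/cora.content') as f:
  fields = f.readline().rstrip('\n').split('\t')
print('paper id:', fields[0])
print('number of word indicators:', len(fields[1:-1]))
print('class label:', fields[-1])
###Output
_____no_output_____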
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variablesThe file paths to the train and test data are based on the command line flagvalues used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
HyperparametersWe will use an instance of `HParams` to include various hyperparameters andconstants used for training and evaluation. We briefly describe each of thembelow:- **num_classes**: There are a total of 7 different classes- **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.- **distance_type**: This is the distance metric used to regularize the sample with its neighbors.- **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function.- **num_neighbors**: The number of neighbors used for graph regularization.- **num_fc_units**: The number of fully connected layers in our neural network.- **train_epochs**: The number of training epochs.- **batch_size**: Batch size used for training and evaluation.- **dropout_rate**: Controls the rate of dropout following each fully connected layer.- **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
###Output
_____no_output_____
###Markdown
Load train and test dataAs described earlier in this notebook, the input training and test data havebeen created by the **'preprocess_cora_dataset.py'**. We will load them into two`tf.data.Dataset` objects -- one for train and one for test.In the input layer of our model, we will extract not just the 'words' and the'label' features from each sample, but also corresponding neighbor featuresbased on the `hparams.num_neighbors`. Instances with fewer neighbors than`hparams.num_neighbors` will be assigned dummy values for those non-existentneighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
return features, labels
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definitionIn order to demonstrate the use of graph regularization, we build a base modelfor this problem first. We will use a simple feed-forward neural network with 2hidden layers and dropout in between. We illustrate the creation of the basemodel using all model types supported by the `tf.Keras` framework -- sequential,functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
###Output
_____no_output_____
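###Markdown
The cell above uses the functional model. If you would rather experiment with the subclass model, recall from the comment above that its summary cannot be generated until the model is built; one way to build it, sketched below, is to call the model once on a batch of features from the training dataset.
###Code
# Sketch: build the subclass model by calling it on one batch so that
# summary() can be generated. This does not affect the rest of the tutorial.
subclass_model = make_mlp_subclass_model(HPARAMS)
for feature_batch, _ in train_dataset.take(1):
  subclass_model(feature_batch)  # builds the layer weights
subclass_model.summary()
###Output
_____no_output_____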
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularizationIncorporating graph regularization into the loss term of an existing`tf.Keras.Model` requires just a few lines of code. The base model is wrapped tocreate a new `tf.Keras` subclass model, whose loss includes graphregularization. To assess the incremental benefit of graph regularization, we will create a newbase model instance. This is because `base_model` has already been trained for afew iterations, and reusing this trained model to create a graph-regularizedmodel will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
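###Markdown
As an optional follow-up (a sketch, not part of the original tutorial), both models can be evaluated side by side to report how much graph regularization changed test accuracy; note that `eval_results` above was overwritten by the second evaluation, so both models are re-evaluated here.
###Code
# Re-evaluate both models and report the accuracy difference attributable
# to graph regularization (reuses the models defined above).
base_eval = dict(
    zip(base_model.metrics_names,
        base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
graph_eval = dict(
    zip(graph_reg_model.metrics_names,
        graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print('Accuracy delta (graph-regularized - base): {:.4f}'.format(
    graph_eval['accuracy'] - base_eval['accuracy']))
###Output
_____no_output_____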
###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://research.google/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup 1. Install TensorFlow 2.0.x to create an interactive development environment with eager execution.2. Install the Neural Structured Learning package.
###Code
!pip install tensorflow-gpu==2.0.1
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
###Output
_____no_output_____
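###Markdown
Before converting the data, we can take a quick look at the citation graph itself. The sketch below assumes the standard layout of `cora.cites`, with one whitespace-separated `cited_paper_id citing_paper_id` pair per line; it is only an illustrative check.
###Code
# Count the citation edges in the raw graph file (illustrative sketch).
with open('/tmp/cora/cora.cites') as f:
  edges = [line.split() for line in f if line.strip()]
print('Number of citation edges:', len(edges))
print('Example edge (cited paper, citing paper):', edges[0])
###Output
_____no_output_____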
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variablesThe file paths to the train and test data are based on the command line flagvalues used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
HyperparametersWe will use an instance of `HParams` to include various hyperparameters andconstants used for training and evaluation. We briefly describe each of thembelow:- **num_classes**: There are a total 7 different classes- **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.- **distance_type**: This is the distance metric used to regularize the sample with its neighbors.- **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function.- **num_neighbors**: The number of neighbors used for graph regularization. This value has to be less than or equal to the `max_nbrs` command-line argument used above when running `preprocess_cora_dataset.py`.- **num_fc_units**: The number of fully connected layers in our neural network.- **train_epochs**: The number of training epochs.- **batch_size**: Batch size used for training and evaluation.- **dropout_rate**: Controls the rate of dropout following each fully connected layer- **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
###Output
_____no_output_____
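###Markdown
Since `num_neighbors` must not exceed the `--max_nbrs` value used when running `preprocess_cora_dataset.py` (5 in this tutorial), a small guard like the sketch below can catch misconfigurations early.
###Code
# Guard against requesting more neighbors than were materialized during
# preprocessing (--max_nbrs=5 above).
assert HPARAMS.num_neighbors <= 5, (
    'num_neighbors must not exceed the max_nbrs used in preprocessing.')
###Output
_____no_output_____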
###Markdown
Load train and test dataAs described earlier in this notebook, the input training and test data havebeen created by the **'preprocess_cora_dataset.py'**. We will load them into two`tf.data.Dataset` objects -- one for train and one for test.In the input layer of our model, we will extract not just the 'words' and the'label' features from each sample, but also corresponding neighbor featuresbased on the `hparams.num_neighbors` value. Instances with fewer neighbors than`hparams.num_neighbors` will be assigned dummy values for those non-existentneighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth label.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
label = features.pop('label')
return features, label
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definitionIn order to demonstrate the use of graph regularization, we build a base modelfor this problem first. We will use a simple feed-forward neural network with 2hidden layers and dropout in between. We illustrate the creation of the basemodel using all model types supported by the `tf.Keras` framework -- sequential,functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularizationIncorporating graph regularization into the loss term of an existing`tf.Keras.Model` requires just a few lines of code. The base model is wrapped tocreate a new `tf.Keras` subclass model, whose loss includes graphregularization. To assess the incremental benefit of graph regularization, we will create a newbase model instance. This is because `base_model` has already been trained for afew iterations, and reusing this trained model to create a graph-regularizedmodel will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup 1. Select TensorFlow 2.x to create an interactive development environment with eager execution.2. Install the Neural Structured Learning package.
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
  !pip install "tensorflow-gpu>=2.0.0"
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
###Output
_____no_output_____
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
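###Markdown
To confirm that the script added neighbor features, we can inspect the feature keys of a single merged training example. This is an illustrative sketch; the path matches the `--output_train_data` flag above, and neighbor features are prefixed with `NL_nbr_`.
###Code
import tensorflow as tf
# Parse one serialized tf.train.Example from the merged training data and list
# a few of its feature keys.
raw_dataset = tf.data.TFRecordDataset(['/tmp/cora/train_merged_examples.tfr'])
for raw_record in raw_dataset.take(1):
  example = tf.train.Example()
  example.ParseFromString(raw_record.numpy())
  print(sorted(example.features.feature.keys())[:10])
###Output
_____no_output_____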
###Markdown
Global variablesThe file paths to the train and test data are based on the command line flagvalues used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
HyperparametersWe will use an instance of `HParams` to include various hyperparameters andconstants used for training and evaluation. We briefly describe each of thembelow:- **num_classes**: There are a total of 7 different classes- **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.- **distance_type**: This is the distance metric used to regularize the sample with its neighbors.- **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function.- **num_neighbors**: The number of neighbors used for graph regularization.- **num_fc_units**: The number of fully connected layers in our neural network.- **train_epochs**: The number of training epochs.- **batch_size**: Batch size used for training and evaluation.- **dropout_rate**: Controls the rate of dropout following each fully connected layer.- **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
###Output
_____no_output_____
###Markdown
Load train and test dataAs described earlier in this notebook, the input training and test data havebeen created by the **'preprocess_cora_dataset.py'**. We will load them into two`tf.data.Dataset` objects -- one for train and one for test.In the input layer of our model, we will extract not just the 'words' and the'label' features from each sample, but also corresponding neighbor featuresbased on the `hparams.num_neighbors`. Instances with fewer neighbors than`hparams.num_neighbors` will be assigned dummy values for those non-existentneighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
return features, labels
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definitionIn order to demonstrate the use of graph regularization, we build a base modelfor this problem first. We will use a simple feed-forward neural network with 2hidden layers and dropout in between. We illustrate the creation of the basemodel using all model types supported by the `tf.Keras` framework -- sequential,functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularization Incorporating graph regularization into the loss term of an existing `tf.Keras.Model` requires just a few lines of code. The base model is wrapped to create a new `tf.Keras` subclass model, whose loss includes graph regularization. To assess the incremental benefit of graph regularization, we will create a new base model instance. This is because `base_model` has already been trained for a few iterations, and reusing this trained model to create a graph-regularized model will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub Overview Graph regularization is a specific technique under the broader paradigm of Neural Graph Learning ([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The core idea is to train neural network models with a graph-regularized objective, harnessing both labeled and unlabeled data. In this tutorial, we will explore the use of graph regularization to classify documents that form a natural (organic) graph. The general recipe for creating a graph-regularized model using the Neural Structured Learning (NSL) framework is as follows: 1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features. 2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API. 3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective. 4. Train and evaluate the graph `Keras` model. Setup 1. Install TensorFlow 2.x to create an interactive development environment with eager execution. 2. Install the Neural Structured Learning package.
###Code
!pip install --quiet tensorflow==2.0.0-rc0
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
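# Optional illustration (not part of the original tutorial): the tutorial treats
# the citation graph as undirected, i.e. an edge (A, B) in cora.cites is also
# considered as (B, A). A minimal sketch of that symmetrization over the raw
# edge list; the actual data preparation is done by the preprocessing script in
# a later cell.
edges = set()
with open('/tmp/cora/cora.cites') as cites_file:
  for line in cites_file:
    src, dst = line.split()
    edges.add((src, dst))
    edges.add((dst, src))
print('Number of directed edges after symmetrization:', len(edges))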
###Output
_____no_output_____
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variables The file paths to the train and test data are based on the command line flag values used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
Hyperparameters We will use an instance of `HParams` to include various hyperparameters and constants used for training and evaluation. We briefly describe each of them below: - **num_classes**: There are a total of 7 different classes. - **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not. - **distance_type**: This is the distance metric used to regularize the sample with its neighbors. - **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function. - **num_neighbors**: The number of neighbors used for graph regularization. - **num_fc_units**: A list giving the number of hidden units in each fully connected layer of our neural network. - **train_epochs**: The number of training epochs. - **batch_size**: Batch size used for training and evaluation. - **dropout_rate**: Controls the rate of dropout following each fully connected layer. - **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
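# Optional illustration (not part of the original tutorial): print the chosen
# hyperparameter values. Any of them can be overridden after construction, for
# example HPARAMS.num_neighbors = 2, as long as it does not exceed the
# --max_nbrs value used when running preprocess_cora_dataset.py above.
print('Hyperparameters:', vars(HPARAMS))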
###Output
_____no_output_____
###Markdown
Load train and test data As described earlier in this notebook, the input training and test data have been created by the **'preprocess_cora_dataset.py'** script. We will load them into two `tf.data.Dataset` objects -- one for train and one for test. In the input layer of our model, we will extract not just the 'words' and the 'label' features from each sample, but also corresponding neighbor features based on `hparams.num_neighbors`. Instances with fewer neighbors than `hparams.num_neighbors` will be assigned dummy values for those non-existent neighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
return features, labels
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definition In order to demonstrate the use of graph regularization, we build a base model for this problem first. We will use a simple feed-forward neural network with 2 hidden layers and dropout in between. We illustrate the creation of the base model using all model types supported by the `tf.Keras` framework -- sequential, functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
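# Optional illustration (not part of the original tutorial): if you instead use
# the subclass variant via make_mlp_subclass_model(), the model has to be built
# before its summary can be generated. One simple way to build it is to call
# the model on a single batch of features; 'subclass_model' is just a local
# name used for this sketch.
subclass_model = make_mlp_subclass_model(HPARAMS)
for feature_batch, _ in train_dataset.take(1):
  subclass_model(feature_batch)
subclass_model.summary()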
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularization Incorporating graph regularization into the loss term of an existing `tf.Keras.Model` requires just a few lines of code. The base model is wrapped to create a new `tf.Keras` subclass model, whose loss includes graph regularization. To assess the incremental benefit of graph regularization, we will create a new base model instance. This is because `base_model` has already been trained for a few iterations, and reusing this trained model to create a graph-regularized model will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.GraphRegConfig(
neighbor_config=nsl.configs.GraphNeighborConfig(
max_neighbors=HPARAMS.num_neighbors),
multiplier=HPARAMS.graph_regularization_multiplier,
distance_config=nsl.configs.DistanceConfig(
distance_type=HPARAMS.distance_type, sum_over_axis=-1))
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup 1. Install TensorFlow 2.2.x to create an interactive development environment with eager execution.2. Install the Neural Structured Learning package.
###Code
!pip install --quiet tf-nightly==2.2.0.dev20200119
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
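# Optional illustration (not part of the original tutorial): the tutorial treats
# the citation graph as undirected, i.e. an edge (A, B) in cora.cites is also
# considered as (B, A). A minimal sketch of that symmetrization over the raw
# edge list; the actual data preparation is done by the preprocessing script in
# a later cell.
edges = set()
with open('/tmp/cora/cora.cites') as cites_file:
  for line in cites_file:
    src, dst = line.split()
    edges.add((src, dst))
    edges.add((dst, src))
print('Number of directed edges after symmetrization:', len(edges))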
###Output
_____no_output_____
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variables The file paths to the train and test data are based on the command line flag values used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
Hyperparameters We will use an instance of `HParams` to include various hyperparameters and constants used for training and evaluation. We briefly describe each of them below: - **num_classes**: There are a total of 7 different classes. - **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not. - **distance_type**: This is the distance metric used to regularize the sample with its neighbors. - **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function. - **num_neighbors**: The number of neighbors used for graph regularization. This value has to be less than or equal to the `max_nbrs` command-line argument used above when running `preprocess_cora_dataset.py`. - **num_fc_units**: A list giving the number of hidden units in each fully connected layer of our neural network. - **train_epochs**: The number of training epochs. - **batch_size**: Batch size used for training and evaluation. - **dropout_rate**: Controls the rate of dropout following each fully connected layer. - **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
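# Optional illustration (not part of the original tutorial): print the chosen
# hyperparameter values. Any of them can be overridden after construction, for
# example HPARAMS.num_neighbors = 2, as long as it does not exceed the
# --max_nbrs value used when running preprocess_cora_dataset.py above.
print('Hyperparameters:', vars(HPARAMS))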
###Output
_____no_output_____
###Markdown
Load train and test data As described earlier in this notebook, the input training and test data have been created by the **'preprocess_cora_dataset.py'** script. We will load them into two `tf.data.Dataset` objects -- one for train and one for test. In the input layer of our model, we will extract not just the 'words' and the 'label' features from each sample, but also corresponding neighbor features based on the `hparams.num_neighbors` value. Instances with fewer neighbors than `hparams.num_neighbors` will be assigned dummy values for those non-existent neighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth label.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
label = features.pop('label')
return features, label
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definition In order to demonstrate the use of graph regularization, we build a base model for this problem first. We will use a simple feed-forward neural network with 2 hidden layers and dropout in between. We illustrate the creation of the base model using all model types supported by the `tf.Keras` framework -- sequential, functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
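# Optional illustration (not part of the original tutorial): if you instead use
# the subclass variant via make_mlp_subclass_model(), the model has to be built
# before its summary can be generated. One simple way to build it is to call
# the model on a single batch of features; 'subclass_model' is just a local
# name used for this sketch.
subclass_model = make_mlp_subclass_model(HPARAMS)
for feature_batch, _ in train_dataset.take(1):
  subclass_model(feature_batch)
subclass_model.summary()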
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularization Incorporating graph regularization into the loss term of an existing `tf.Keras.Model` requires just a few lines of code. The base model is wrapped to create a new `tf.Keras` subclass model, whose loss includes graph regularization. To assess the incremental benefit of graph regularization, we will create a new base model instance. This is because `base_model` has already been trained for a few iterations, and reusing this trained model to create a graph-regularized model will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup 1. Install TensorFlow 2.2.x to create an interactive development environment with eager execution.2. Install the Neural Structured Learning package.
###Code
!pip install --quiet tf-nightly==2.2.0.dev20200119
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
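# Optional illustration (not part of the original tutorial): the tutorial treats
# the citation graph as undirected, i.e. an edge (A, B) in cora.cites is also
# considered as (B, A). A minimal sketch of that symmetrization over the raw
# edge list; the actual data preparation is done by the preprocessing script in
# a later cell.
edges = set()
with open('/tmp/cora/cora.cites') as cites_file:
  for line in cites_file:
    src, dst = line.split()
    edges.add((src, dst))
    edges.add((dst, src))
print('Number of directed edges after symmetrization:', len(edges))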
###Output
_____no_output_____
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variables The file paths to the train and test data are based on the command line flag values used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
Hyperparameters We will use an instance of `HParams` to include various hyperparameters and constants used for training and evaluation. We briefly describe each of them below: - **num_classes**: There are a total of 7 different classes. - **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not. - **distance_type**: This is the distance metric used to regularize the sample with its neighbors. - **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function. - **num_neighbors**: The number of neighbors used for graph regularization. This value has to be less than or equal to the `max_nbrs` command-line argument used above when running `preprocess_cora_dataset.py`. - **num_fc_units**: A list giving the number of hidden units in each fully connected layer of our neural network. - **train_epochs**: The number of training epochs. - **batch_size**: Batch size used for training and evaluation. - **dropout_rate**: Controls the rate of dropout following each fully connected layer. - **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
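# Optional illustration (not part of the original tutorial): print the chosen
# hyperparameter values. Any of them can be overridden after construction, for
# example HPARAMS.num_neighbors = 2, as long as it does not exceed the
# --max_nbrs value used when running preprocess_cora_dataset.py above.
print('Hyperparameters:', vars(HPARAMS))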
###Output
_____no_output_____
###Markdown
Load train and test data As described earlier in this notebook, the input training and test data have been created by the **'preprocess_cora_dataset.py'** script. We will load them into two `tf.data.Dataset` objects -- one for train and one for test. In the input layer of our model, we will extract not just the 'words' and the 'label' features from each sample, but also corresponding neighbor features based on `hparams.num_neighbors`. Instances with fewer neighbors than `hparams.num_neighbors` will be assigned dummy values for those non-existent neighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
return features, labels
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definition In order to demonstrate the use of graph regularization, we build a base model for this problem first. We will use a simple feed-forward neural network with 2 hidden layers and dropout in between. We illustrate the creation of the base model using all model types supported by the `tf.Keras` framework -- sequential, functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
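# Optional illustration (not part of the original tutorial): if you instead use
# the subclass variant via make_mlp_subclass_model(), the model has to be built
# before its summary can be generated. One simple way to build it is to call
# the model on a single batch of features; 'subclass_model' is just a local
# name used for this sketch.
subclass_model = make_mlp_subclass_model(HPARAMS)
for feature_batch, _ in train_dataset.take(1):
  subclass_model(feature_batch)
subclass_model.summary()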
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularization Incorporating graph regularization into the loss term of an existing `tf.Keras.Model` requires just a few lines of code. The base model is wrapped to create a new `tf.Keras` subclass model, whose loss includes graph regularization. To assess the incremental benefit of graph regularization, we will create a new base model instance. This is because `base_model` has already been trained for a few iterations, and reusing this trained model to create a graph-regularized model will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Neural Structured Learning Authors
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://research.google/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup Install the Neural Structured Learning package.
###Code
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print(
"GPU is",
"available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
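Before downloading, here is a toy illustration of how such a multi-hot 'words' vector is formed (a hypothetical 6-word vocabulary for illustration only; the real Cora vocabulary has 1433 words):

```python
# Toy example (not from Cora): build a multi-hot bag-of-words vector.
vocabulary = ['graph', 'neural', 'network', 'kernel', 'bayesian', 'svm']
paper_words = {'graph', 'neural', 'network'}
multi_hot = [1 if word in paper_words else 0 for word in vocabulary]
print(multi_hot)  # [1, 1, 1, 0, 0, 0]
```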
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
###Output
_____no_output_____
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variablesThe file paths to the train and test data are based on the command line flagvalues used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
HyperparametersWe will use an instance of `HParams` to include various hyperparameters andconstants used for training and evaluation. We briefly describe each of thembelow:- **num_classes**: There are a total 7 different classes- **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.- **distance_type**: This is the distance metric used to regularize the sample with its neighbors.- **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function.- **num_neighbors**: The number of neighbors used for graph regularization. This value has to be less than or equal to the `max_nbrs` command-line argument used above when running `preprocess_cora_dataset.py`.- **num_fc_units**: The number of fully connected layers in our neural network.- **train_epochs**: The number of training epochs.- **batch_size**: Batch size used for training and evaluation.- **dropout_rate**: Controls the rate of dropout following each fully connected layer- **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
###Output
_____no_output_____
###Markdown
Load train and test dataAs described earlier in this notebook, the input training and test data havebeen created by the **'preprocess_cora_dataset.py'**. We will load them into two`tf.data.Dataset` objects -- one for train and one for test.In the input layer of our model, we will extract not just the 'words' and the'label' features from each sample, but also corresponding neighbor featuresbased on the `hparams.num_neighbors` value. Instances with fewer neighbors than`hparams.num_neighbors` will be assigned dummy values for those non-existentneighbor features.
###Code
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth label.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
label = features.pop('label')
return features, label
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definitionIn order to demonstrate the use of graph regularization, we build a base modelfor this problem first. We will use a simple feed-forward neural network with 2hidden layers and dropout in between. We illustrate the creation of the basemodel using all model types supported by the `tf.Keras` framework -- sequential,functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(hparams.num_classes)(cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(hparams.num_classes)
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularizationIncorporating graph regularization into the loss term of an existing`tf.Keras.Model` requires just a few lines of code. The base model is wrapped tocreate a new `tf.Keras` subclass model, whose loss includes graphregularization. To assess the incremental benefit of graph regularization, we will create a newbase model instance. This is because `base_model` has already been trained for afew iterations, and reusing this trained model to create a graph-regularizedmodel will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____
###Markdown
Copyright 2019 Google LLC
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Graph regularization for document classification using natural graphs View on TensorFlow.org Run in Google Colab View source on GitHub OverviewGraph regularization is a specific technique under the broader paradigm ofNeural Graph Learning([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The coreidea is to train neural network models with a graph-regularized objective,harnessing both labeled and unlabeled data.In this tutorial, we will explore the use of graph regularization to classifydocuments that form a natural (organic) graph.The general recipe for creating a graph-regularized model using the NeuralStructured Learning (NSL) framework is as follows:1. Generate training data from the input graph and sample features. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples. The resulting training data will contain neighbor features in addition to the original node features.2. Create a neural network as a base model using the `Keras` sequential, functional, or subclass API.3. Wrap the base model with the **`GraphRegularization`** wrapper class, which is provided by the NSL framework, to create a new graph `Keras` model. This new model will include a graph regularization loss as the regularization term in its training objective.4. Train and evaluate the graph `Keras` model. Setup 1. Install TensorFlow 2.x to create an interactive developing environment with eager execution.2. Install the Neural Structured Learning package.
###Code
!pip install --quiet tensorflow-gpu==2.0.0-rc0
!pip install --quiet neural-structured-learning
###Output
_____no_output_____
###Markdown
Dependencies and imports
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
###Output
_____no_output_____
###Markdown
Cora datasetThe [Cora dataset](https://linqs.soe.ucsc.edu/data) is a citation graph wherenodes represent machine learning papers and edges represent citations betweenpairs of papers. The task involved is document classification where the goal isto categorize each paper into one of 7 categories. In other words, this is amulti-class classification problem with 7 classes. GraphThe original graph is directed. However, for the purpose of this example, weconsider the undirected version of this graph. So, if paper A cites paper B, wealso consider paper B to have cited A. Although this is not necessarily true, inthis example, we consider citations as a proxy for similarity, which is usuallya commutative property. FeaturesEach paper in the input effectively contains 2 features:1. **Words**: A dense, multi-hot bag-of-words representation of the text in the paper. The vocabulary for the Cora dataset contains 1433 unique words. So, the length of this feature is 1433, and the value at position 'i' is 0/1 indicating whether word 'i' in the vocabulary exists in the given paper or not.2. **Label**: A single integer representing the class ID (category) of the paper. Download the Cora dataset
###Code
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
###Output
_____no_output_____
###Markdown
Convert the Cora data to the NSL formatIn order to preprocess the Cora dataset and convert it to the format required byNeural Structured Learning, we will run the **'preprocess_cora_dataset.py'**script, which is included in the NSL github repository. This script does thefollowing:1. Generate neighbor features using the original node features and the graph.2. Generate train and test data splits containing `tf.train.Example` instances.3. Persist the resulting train and test data in the `TFRecord` format.
###Code
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
###Output
_____no_output_____
###Markdown
Global variablesThe file paths to the train and test data are based on the command line flagvalues used to invoke the **'preprocess_cora_dataset.py'** script above.
###Code
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
###Output
_____no_output_____
###Markdown
HyperparametersWe will use an instance of `HParams` to include various hyperparameters andconstants used for training and evaluation. We briefly describe each of thembelow:- **num_classes**: There are a total 7 different classes- **max_seq_length**: This is the size of the vocabulary and all instances in the input have a dense multi-hot, bag-of-words representation. In other words, a value of 1 for a word indicates that the word is present in the input and a value of 0 indicates that it is not.- **distance_type**: This is the distance metric used to regularize the sample with its neighbors.- **graph_regularization_multiplier**: This controls the relative weight of the graph regularization term in the overall loss function.- **num_neighbors**: The number of neighbors used for graph regularization.- **num_fc_units**: The number of fully connected layers in our neural network.- **train_epochs**: The number of training epochs.- **batch_size**: Batch size used for training and evaluation.- **dropout_rate**: Controls the rate of dropout following each fully connected layer- **eval_steps**: The number of batches to process before deeming evaluation is complete. If set to `None`, all instances in the test set are evaluated.
###Code
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
###Output
_____no_output_____
###Markdown
Load train and test dataAs described earlier in this notebook, the input training and test data havebeen created by the **'preprocess_cora_dataset.py'**. We will load them into two`tf.data.Dataset` objects -- one for train and one for test.In the input layer of our model, we will extract not just the 'words' and the'label' features from each sample, but also corresponding neighbor featuresbased on the `hparams.num_neighbors`. Instances with fewer neighbors than`hparams.num_neighbors` will be assigned dummy values for those non-existentneighbor features.
###Code
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above.
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
return features, labels
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
###Output
_____no_output_____
###Markdown
Let's peek into the train dataset to look at its contents.
###Code
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Let's peek into the test dataset to look at its contents.
###Code
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
###Output
_____no_output_____
###Markdown
Model definitionIn order to demonstrate the use of graph regularization, we build a base modelfor this problem first. We will use a simple feed-forward neural network with 2hidden layers and dropout in between. We illustrate the creation of the basemodel using all model types supported by the `tf.Keras` framework -- sequential,functional, and subclass. Sequential base model
###Code
def make_mlp_sequential_model(hparams):
"""Creates a sequential multi-layer perceptron model."""
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
Functional base model
###Code
def make_mlp_functional_model(hparams):
"""Creates a functional API-based multi-layer perceptron model."""
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')(
cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
###Output
_____no_output_____
###Markdown
Subclass base model
###Code
def make_mlp_subclass_model(hparams):
"""Creates a multi-layer perceptron subclass model in Keras."""
class MLP(tf.keras.Model):
"""Subclass model defining a multi-layer perceptron."""
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(
hparams.num_classes, activation='softmax')
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
###Output
_____no_output_____
###Markdown
Create base model(s)
###Code
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
###Output
_____no_output_____
###Markdown
Train base MLP model
###Code
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate base MLP model
###Code
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
"""Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
"""
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
###Output
_____no_output_____
###Markdown
Train MLP model with graph regularizationIncorporating graph regularization into the loss term of an existing`tf.Keras.Model` requires just a few lines of code. The base model is wrapped tocreate a new `tf.Keras` subclass model, whose loss includes graphregularization. To assess the incremental benefit of graph regularization, we will create a newbase model instance. This is because `base_model` has already been trained for afew iterations, and reusing this trained model to create a graph-regularizedmodel will not be a fair comparison for `base_model`.
###Code
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
###Output
_____no_output_____
###Markdown
Evaluate MLP model with graph regularization
###Code
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
###Output
_____no_output_____ |
Sample/Day_13_Sample.ipynb | ###Markdown
* Learning objectives * Correctly insert and delete data in a DataFrame * Correctly merge and reshape DataFrames * Understand the differences between the DataFrame merge methods
###Code
# Load the NumPy and Pandas packages
import numpy as np
import pandas as pd
# Check that they loaded correctly and print the versions
print(np)
print(np.__version__)
print(pd)
print(pd.__version__)
###Output
<module 'numpy' from 'D:\\anaconda3\\lib\\site-packages\\numpy\\__init__.py'>
1.19.2
<module 'pandas' from 'D:\\anaconda3\\lib\\site-packages\\pandas\\__init__.py'>
1.1.3
###Markdown
[Basics 13] Inserting and deleting data in a DataFrame * Adding a column
###Code
df = pd.DataFrame([[1], [2]], columns = ['a'])
print(df)
print('='*20)
df['b'] = pd.Series([3, 4])
print(df)
###Output
a
0 1
1 2
====================
a b
0 1 3
1 2 4
###Markdown
* Appending rows
###Code
df = pd.DataFrame([[1, 2]], columns = ['a', 'b'])
print(df)
print('='*20)
df = df.append(pd.DataFrame([[3, 4]], columns = ['a', 'b']))
print(df)
###Output
a b
0 1 2
====================
a b
0 1 2
0 3 4
###Markdown
* Appending rows + resetting the index
###Code
df = pd.DataFrame([[1, 2]], columns = ['a', 'b'])
print(df)
print('='*20)
df = df.append(pd.DataFrame([[3, 4]], columns = ['a', 'b']))
df = df.reset_index(drop=True)
print(df)
###Output
a b
0 1 2
====================
a b
0 1 2
1 3 4
###Markdown
* Deleting columns
###Code
df = pd.DataFrame([[1, 2, 3]], columns = ['a', 'b', 'c'])
print(df)
print('='*20)
del df['a']
df.pop('c')
###Output
a b c
0 1 2 3
====================
###Markdown
* Deleting rows
###Code
df = pd.DataFrame([[1], [2]], columns = ['a'])
print(df)
print('='*20)
df = df.drop(1)
print(df)
###Output
a
0 1
1 2
====================
a
0 1
###Markdown
Merging and reshaping DataFrames * Concat
###Code
one = pd.DataFrame({
'id':[1, 2],
'Name': ['Alex', 'Amy'],
})
two = pd.DataFrame({
'id':[1, 2],
'Name': ['Bob', 'Tom']
})
pd.concat([one, two])
one = pd.DataFrame({
'id':[1, 2],
'Name': ['Alex', 'Amy'],
})
two = pd.DataFrame({
'id':[1, 2],
'Name': ['Bob', 'Tom']
})
pd.concat([one, two]).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
* Merge
###Code
one = pd.DataFrame({
'id':[1, 2],
'Name': ['Alex', 'Amy'],
})
two = pd.DataFrame({
'id':[1, 2],
'Score': [98, 60]
})
pd.merge(one, two, on='id')
###Output
_____no_output_____
###Markdown
* Join
###Code
one = pd.DataFrame({
'Name': ['Alex', 'Amy'],
})
two = pd.DataFrame({
'Score': [98, 60]
})
one.join(two)
###Output
_____no_output_____
###Markdown
* Group By
###Code
df = pd.DataFrame({
'A' : ['foo', 'bar', 'foo', 'bar'],
'B' : ['one', 'one', 'two', 'three'],
'C' : [1,2,3,4],
'D' : [10, 20, 30, 40]
})
print( df.groupby('A').sum() )
print('='*20)
print( df.groupby('A').agg(sum) )
print('='*20)
print( df.groupby(['A','B']).sum() )
###Output
_____no_output_____ |
practice/get-data-from-mongo.ipynb | ###Markdown
Basic Query
###Code
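# NOTE: `medical_notes_kaggle_db` is assumed to be a pymongo Database handle created
# earlier in the notebook (e.g. MongoClient(<host>)['medical_notes_kaggle']); it is not
# defined in this cell.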
query = {}
results = medical_notes_kaggle_db.train.find(query).limit(2)
data = list(results)
data
###Output
_____no_output_____ |
paper_retrieval/evaluation_notebooks/sent2vec_evaluation.ipynb | ###Markdown
Sent2Vec Evaluation This notebook contains the evaluation of the Sent2Vec document embeddings for paper retrieval.Multiple models were trained, but in the end the one pretrained on the Wikipedia data gave the best performance and was chosen for the final system. Additionally, pseudo relevance feedback and ontology query expansion are tested, of which the ontology query expansion led to improvements.
###Code
%load_ext autoreload
%autoreload 2
import json
import sys
import os
import pickle
import logging
logging.basicConfig(level=logging.INFO, handlers=[logging.FileHandler("sent2vec.log"), logging.StreamHandler(sys.stdout)])
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.expand_frame_repr', False)
pd.set_option('max_colwidth', 1000)
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
from tqdm.notebook import tqdm
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
tqdm.pandas()
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
from evaluation import *
from preprocessing import Corpus, BasicPreprocessing, BigramPreprocessor, SpacyPreprocessor, StopWordPreprocessor
from retrieval_algorithms.sent2vec_retrieval_algorithm import Sent2VecRetrievalAlgorithm
from retrieval_algorithms.prf_wrapper import PRFWrapper
from retrieval_algorithms.ontology_expansion_wrapper import OntologyExpansionWrapper
!pipenv install Cython
!git clone https://github.com/epfml/sent2vec.git
!cd sent2vec && make
!cd sent2vec && pipenv run pip install .
###Output
_____no_output_____
###Markdown
Load corpus using different preprocessing pipelines
###Code
base_file = "../../data/kit_expert_2019_all_papers.csv"
p = [BasicPreprocessing(), StopWordPreprocessor()]
papers_basic = Corpus(base_file, p)
###Output
INFO:preprocessing.pipeline:Start preprocessing pipeline "basic_NoStopWords" for file ../../data/kit_expert_2019_all_papers.csv.
INFO:preprocessing.pipeline:Loaded cached preprocessed corpus from ../../data/kit_expert_2019_all_papers_basic_NoStopWords
###Markdown
Load keywords to use as test data
###Code
with open("../../data/kit_expert_2019_all_keywords.json", "r") as file:
keywords = json.load(file)
general_keywords = [k for k in keywords if k["level"]<=1]
specific_keywords = [k for k in keywords if k["level"]>=2 and len(k["paper_ids"])>=10]
general_keywords_val = ("general keywords validation", general_keywords[0:int(len(general_keywords)*0.8)])
specific_keywords_val = ("specific keywords validation", specific_keywords[0:int(len(specific_keywords)*0.8)])
general_keywords_test = ("general keywords test", general_keywords[int(len(general_keywords)*0.8):])
specific_keywords_test = ("specific keywords test", specific_keywords[int(len(specific_keywords)*0.8):])
papers_basic.data.str.join(" ").to_csv("sentences.txt", index=False, header=False)
###Output
_____no_output_____
###Markdown
Train sent2vec models
###Code
!mkdir ../../data/models/sent2vec/sent2vec_1000_60epoch
!./sent2vec/fasttext sent2vec -input sentences.txt -output ../../data/models/sent2vec/sent2vec_1000_60epoch/model -minCount 2 -dim 1000 -epoch 60 -lr 0.4 -wordNgrams 2 -loss ns -neg 20 -thread 20 -numCheckPoints 10
r=Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/sent2vec_700_120epoch_neg5/model_Chk10.ckpt.bin",False)
r.prepare(papers_basic)
sent2vec_pretrained_models = [
("sent2vec torontobooks unigram", Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/torontobooks_unigrams.bin",False), papers_basic),
("sent2vec torontobooks bigram", Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/torontobooks_bigrams.bin",False), papers_basic),
("sent2vec wikipedia unigram", Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/wiki_unigrams.bin",False), papers_basic),
("sent2vec wikipedia bigram", Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/wiki_bigrams.bin",False), papers_basic),
]
sent2vec_pretrained_results = train_evaluate_models(sent2vec_pretrained_models, [general_keywords_val, specific_keywords_val], n_jobs=4)
sent2vec_pretrained_results.to_csv("../../data/results/sent2vec_pretrained_results.csv")
sent2vec_pretrained_results = pd.read_csv("../../data/results/sent2vec_pretrained_results.csv", index_col=0, header=[0,1,2])
sent2vec_pretrained_results
sent2vec_epoch_models = [(f"sent2vec epoch={i*1}", Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/sent2vec_1000_40epoch_neg20/model_Chk{i}.ckpt.bin",False), papers_basic) for i in range(1,40)]
sent2vec_epoch_results = train_evaluate_models(sent2vec_epoch_models, [general_keywords_val, specific_keywords_val], n_jobs=5)
sent2vec_epoch_results.to_csv("../../data/results/sent2vec_1000_40epoch_neg20.csv")
sent2vec_epoch_results = pd.read_csv("../../data/results/sent2vec_1000_40epoch_neg20.csv", index_col=0, header=[0,1,2])
plot_data = sent2vec_epoch_results.xs('mAP', level=1, axis=1).xs('avg', level=1, axis=1)
err_data = sent2vec_epoch_results.xs('mAP', level=1, axis=1).xs('err', level=1, axis=1)
plot_data.index = range(1,41)
ax = plot_data.iloc[:,1].plot(label="specific queries", figsize=(12,6), style="-o", legend=True, xlim=(1,40), ylim=(0.0,0.29), yticks=np.arange(0,0.30,0.025))
ax = plot_data.iloc[:,0].plot(label="general queries", figsize=(12,6), style="-o", legend=True, xlim=(1,40), ylim=(0.0,0.29))
ax.set_ylabel("mAP");
ax.set_xlabel("training epoch")
ax.legend(loc="center right")
# plt.fill_between(plot_data.index, plot_data.iloc[:,1].values-err_data.iloc[:,1].values, plot_data.iloc[:,1].values+err_data.iloc[:,1].values,
# alpha=0.4, edgecolor=sns.color_palette("Blues")[3], facecolor=sns.color_palette("Blues")[1], linewidth=1)
# plt.fill_between(plot_data.index, plot_data.iloc[:,0].values-err_data.iloc[:,0].values, plot_data.iloc[:,0].values+err_data.iloc[:,0].values,
# alpha=0.4, edgecolor=sns.color_palette("Oranges")[3], facecolor=sns.color_palette("Oranges")[1], linewidth=1)
plt.savefig("images/sent2vec_epoch_search.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Pseudo relevance feedback
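The `PRFWrapper` evaluated below applies pseudo relevance feedback around the base retrieval model. Conceptually, the mechanism looks roughly like the sketch below (hypothetical `embed` function and `doc_vectors` matrix assumed for illustration; the wrapper's actual parameters and implementation may differ):

```python
import numpy as np

# Hedged sketch of embedding-based pseudo relevance feedback (PRF).
def prf_query_vector(query, embed, doc_vectors, n_feedback_docs=10, weight=0.5):
    q = embed(query)
    # Cosine similarity between the query and every document embedding.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-scores)[:n_feedback_docs]
    # Mix the original query vector with the centroid of the top-ranked documents.
    feedback_centroid = doc_vectors[top].mean(axis=0)
    return (1 - weight) * q + weight * feedback_centroid
```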
###Code
prf_grid = [(150,200,np.round(i,3)) for i in np.linspace(0,1,21)]
search_prf_models = [(f"prf nrd={nrd:.2f} net={net:.2f} ew={ew:.2f}",
PRFWrapper(Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/wiki_bigrams.bin", False), nrd, net, ew),
papers_basic) for nrd, net, ew in prf_grid]
search_prf_results = train_evaluate_models(search_prf_models, [general_keywords_val, specific_keywords_val], n_jobs=6)
search_prf_results.to_csv("../../data/results/sent2vec_wiki_search_prf_results.csv")
search_prf_results = pd.read_csv("../../data/results/sent2vec_wiki_search_prf_results.csv", index_col=0, header=[0,1,2])
plot_data = search_prf_results.xs('mAP', level=1, axis=1).xs('avg', level=1, axis=1)
err_data = search_prf_results.xs('mAP', level=1, axis=1).xs('err', level=1, axis=1)
plot_data.index = np.linspace(0,1,21)
ax = plot_data.iloc[:,1].plot(label="specific queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), ylim=(0.0,0.3))
ax = plot_data.iloc[:,0].plot(label="general queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1),ylim=(0.0,0.3))
ax.set_ylabel("mAP");
ax.set_xlabel("weight λ")
# plt.fill_between(plot_data.index, plot_data.iloc[:,1].values-err_data.iloc[:,1].values, plot_data.iloc[:,1].values+err_data.iloc[:,1].values,
# alpha=0.4, edgecolor=sns.color_palette("Blues")[3], facecolor=sns.color_palette("Blues")[1], linewidth=1)
# plt.fill_between(plot_data.index, plot_data.iloc[:,0].values-err_data.iloc[:,0].values, plot_data.iloc[:,0].values+err_data.iloc[:,0].values,
# alpha=0.4, edgecolor=sns.color_palette("Oranges")[3], facecolor=sns.color_palette("Oranges")[1], linewidth=1)
plt.savefig("images/sent2vec_prf.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Ontology query expansion
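The `OntologyExpansionWrapper` used below expands a query with related terms from the keyword hierarchy before retrieval. A rough sketch of the idea (the hierarchy structure and helper names here are hypothetical; the wrapper's real interface may differ):

```python
# Hedged sketch: expand a query with ontology neighbours, down-weighted by `weight`.
def expand_query(query, hierarchy, weight=0.5):
    related = hierarchy.get(query, {})
    expansion_terms = related.get('children', []) + related.get('parents', [])
    # Keep the original query at full weight and add related terms at reduced weight.
    return [(query, 1.0)] + [(term, weight) for term in expansion_terms]
```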
###Code
with open("../../data/keyword_hierarchy.json", 'r') as file:
keyword_hierarchy = json.load(file)
oqe_grid = [np.round(i,3) for i in np.linspace(0,1,21)]
search_oqe_models = [(f"ontology expansion wrapper w={ew}", OntologyExpansionWrapper(Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/wiki_bigrams.bin", False), keyword_hierarchy, True, ew), papers_basic) for ew in oqe_grid]
search_oqe_results = train_evaluate_models(search_oqe_models, [general_keywords_val, specific_keywords_val], n_jobs=2)
search_oqe_results.to_csv("../../data/results/sent2vec_wiki_search_oqe_results.csv")
search_oqe_results = pd.read_csv("../../data/results/sent2vec_wiki_search_oqe_results.csv", index_col=0, header=[0,1,2])
plot_data = search_oqe_results.xs('mAP', level=1, axis=1).xs('avg', level=1, axis=1)
err_data = search_oqe_results.xs('mAP', level=1, axis=1).xs('err', level=1, axis=1)
plot_data.index = np.linspace(0,1,21)
ax = plot_data.iloc[:,1].plot(label="specific queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), ylim=(0.0,0.3))
ax = plot_data.iloc[:,0].plot(label="general queries", figsize=(12,5), style="-o", legend=True, xticks=[0]+np.linspace(0,1,11), xlim=(0,1), ylim=(0.0,0.3))
ax.set_ylabel("mAP");
ax.set_xlabel("weight λ")
ax.legend(loc="upper right")
# plt.fill_between(plot_data.index, plot_data.iloc[:,1].values-err_data.iloc[:,1].values, plot_data.iloc[:,1].values+err_data.iloc[:,1].values,
# alpha=0.4, edgecolor=sns.color_palette("Blues")[3], facecolor=sns.color_palette("Blues")[1], linewidth=1)
# plt.fill_between(plot_data.index, plot_data.iloc[:,0].values-err_data.iloc[:,0].values, plot_data.iloc[:,0].values+err_data.iloc[:,0].values,
# alpha=0.4, edgecolor=sns.color_palette("Oranges")[3], facecolor=sns.color_palette("Oranges")[1], linewidth=1)
plt.savefig("images/sent2vec_oqe.pdf", transparent=True, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Save best model
###Code
sent2vec_oqe_model = OntologyExpansionWrapper(Sent2VecRetrievalAlgorithm(f"../../data/models/sent2vec/wiki_bigrams.bin", False), keyword_hierarchy, True, 0.5)
sent2vec_oqe_model.prepare(papers_basic)
file_path = "../../data/models/sent2vec/sent2vec_oqe.model"
with open(file_path, "wb") as file:
pickle.dump(sent2vec_oqe_model, file)
###Output
_____no_output_____ |
LanguageModeling/T5/train_T5.ipynb | ###Markdown
Huggingface SageMaker-SDK - T5 Language Modeling example Introduction This notebook uses the SageMaker HuggingFace container to train a language model (T5 in this case). It is based on the following references- https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling- https://www.ogis-ri.co.jp/otc/hiroba/technical/similar-document-search/part7.html- https://github.com/megagonlabs/t5-japanese- https://arxiv.org/abs/1910.10683- https://github.com/google-research/text-to-text-transfer-transformergpu-usage
###Code
!pip install --upgrade pip
!pip install tensorflow
!pip install tensorflow-datasets==4.4.0
!pip install transformers tokenizers datasets
!pip install jax>=0.2.8
!pip install jaxlib>=0.1.59
!pip install flax>=0.3.5
!pip install optax>=0.0.9
!pip install torch==1.10.0+cpu torchvision==0.11.1+cpu torchaudio==0.10.0+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
!pip install sentencepiece==0.1.96
###Output
_____no_output_____
###Markdown
Save the config file to the language model directory
###Code
from transformers import T5Config
config = T5Config.from_pretrained("google/t5-v1_1-base", vocab_size=32000)
config.save_pretrained("./src/japanese-t5-base")
###Output
_____no_output_____
###Markdown
Check that the tokenizer created in the previous step works
###Code
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("./src/japanese-t5-base")
tokenizer.tokenize("私は元気です。あなたは元気ですか?")
###Output
_____no_output_____
###Markdown
This sample notebook uses wiki40b.
###Code
# https://note.com/npaka/n/n0a2d0a4b806e
import os
import tensorflow_datasets as tfds
ds = tfds.load('wiki40b/ja', split='train', try_gcs=True)
# Function to write the dataset out as a text file
def create_txt(file_name, tf_data):
start_paragraph = False
    # Write to the file
with open(file_name, 'w') as f:
for wiki in tf_data.as_numpy_iterator():
for text in wiki['text'].decode().split('\n'):
if start_paragraph:
                    text = text.replace('_NEWLINE_', '') # remove _NEWLINE_
f.write(text + '\n')
start_paragraph = False
                if text == '_START_PARAGRAPH_': # only keep the text that follows _START_PARAGRAPH_
start_paragraph = True
# Write the dataset out as a text file
create_txt('wiki_40b.txt', ds)
from datasets import load_dataset
dataset = load_dataset('text', data_files='wiki_40b.txt', cache_dir="./")
dataset
dataset['train'][0]
###Output
_____no_output_____
###Markdown
Upload the data to S3.
###Code
#import botocore
from datasets.filesystems import S3FileSystem
import sagemaker
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
dataset['train'].to_json('train.json', force_ascii=False)
s3_prefix = 'samples/datasets/wiki40b-jp'
input_train = sess.upload_data(
path='train.json',
key_prefix=f'{s3_prefix}/train'
)
print(input_train)
###Output
_____no_output_____
###Markdown
Starting Sagemaker Training Job To create a `HuggingFace` training job, you need a HuggingFace Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In the Estimator you define which script to use as the entry_point, which instance_type to use, which hyperparameters to pass, and so on.```pythonhuggingface_estimator = HuggingFace( entry_point='run_t5_mlm_flax.py', source_dir='./src', instance_type='ml.p4d.24xlarge', instance_count=1, transformers_version='4.11', pytorch_version='1.9', tensorflow_version='2.5', py_version='py37', role=role, hyperparameters=hyperparameters,)```When you create a SageMaker training job, SageMaker launches and manages the ec2 instances needed to run the huggingface container. It uploads `run_t5_mlm_flax.py`, downloads the data from the sagemaker_session_bucket into /opt/ml/input/data inside the container, and runs the training job. The hyperparameters defined in the HuggingFace estimator are passed in as named arguments. SageMaker also provides useful properties about the training environment through various environment variables, such as:- `SM_MODEL_DIR`: A string representing the path to which the training job writes the model artifacts. After training, the artifacts in this directory are uploaded to S3 for model hosting.- `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.- `SM_CHANNEL_XXXX`: A string representing the path to the directory containing the input data for the specified channel. For example, if you specify two input channels named train and test in the HuggingFace estimator's fit call, the environment variables SM_CHANNEL_TRAIN and SM_CHANNEL_TEST are set.To run this training job locally, you can define `instance_type='local'`, or `instance_type='local_gpu'` for GPU (additional setup is required for GPU; see the [SageMaker documentation](https://sagemaker.readthedocs.io/en/stable/overview.htmllocal-mode)).**Note: this does not work inside SageMaker Studio**
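For illustration, a training script launched this way could read those environment variables roughly as follows (a hedged sketch only; `run_t5_mlm_flax.py` parses its own command-line arguments, so this is not its actual code):

```python
import os

# Paths and resources injected by SageMaker into the training container.
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")              # where artifacts are written
num_gpus = int(os.environ.get("SM_NUM_GPUS", 0))                         # GPUs available on the host
train_channel = os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train")  # input data channel

print(model_dir, num_gpus, train_channel)
```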
###Code
from sagemaker.huggingface import HuggingFace
# hyperparameters, which are passed into the training job
hyperparameters={
'model_type':'t5',
'train_file': '/opt/ml/input/data/train/train.json',
#'validation_file': '/opt/ml/input/data/validation/dev.csv',
#'test_file': '/opt/ml/input/data/test/test.csv',
'config_name':'./japanese-t5-base',
'tokenizer_name': './japanese-t5-base',
'max_seq_length': 512,
'per_device_train_batch_size': 32,
'per_device_eval_batch_size': 32,
'adafactor': 'True',
'learning_rate': 0.001,
'weight_decay': 0.001,
'warmup_steps': 100,
'overwrite_output_dir': 'True',
'preprocessing_num_workers': 96,
'num_train_epochs': 1,
#'logging_strategy': 'epoch',
#'save_strategy': 'epoch',
#'evaluation_strategy': 'epoch',
'logging_steps': 200,
'save_steps': 500,
'eval_steps': 500,
'output_dir':'/opt/ml/model',
'push_to_hub':'False'
}
###Output
_____no_output_____
###Markdown
Since this is a sample, training runs for only 1 epoch here. It completes in about 30 minutes on an ml.p4d.24xlarge.
###Code
#distribution = {'smdistributed':{'dataparallel':{ 'enabled': True }}}
# estimator
huggingface_estimator = HuggingFace(
role=role,
entry_point='run_t5_mlm_flax.py',
source_dir='./src',
instance_type='ml.p4d.24xlarge',
instance_count=1,
max_run=60*60*24*5,
volume_size=500,
transformers_version='4.11',
#pytorch_version='1.9',
tensorflow_version='2.5',
py_version='py37',
hyperparameters=hyperparameters,
distribution=None
)
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({'train': input_train})
###Output
_____no_output_____
###Markdown
Download-model-from-s3
###Code
import os
OUTPUT_DIR = './output/'
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
from sagemaker.s3 import S3Downloader
# Download the trained model
S3Downloader.download(
s3_uri=huggingface_estimator.model_data, # s3 uri where the trained model is located
local_path='.', # local path where *.targ.gz is saved
sagemaker_session=sess # sagemaker session used for training the model
)
# Extract the archive into OUTPUT_DIR
!tar -zxvf model.tar.gz -C output
###Output
_____no_output_____
###Markdown
(Optional) Convert from Flax Model to PyTorch Model
###Code
from transformers import T5Tokenizer, T5ForConditionalGeneration, FlaxT5ForConditionalGeneration
mdl_path = "./output"
pt_model = T5ForConditionalGeneration.from_pretrained(mdl_path, from_flax=True)
pt_model.save_pretrained(mdl_path)
tokenizer = T5Tokenizer.from_pretrained(mdl_path)
###Output
_____no_output_____ |
source-code/keras/Flatland/data_generation.ipynb | ###Markdown
Geometric object generator Required modules Load the `autoreload` IPython extension to ensure module reloads during development.
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load the required modules
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import random
###Output
_____no_output_____
###Markdown
Load the module that implements the object generation for demonstration purposes. It can also be run directly from the command line.
###Code
import figures
###Output
_____no_output_____
###Markdown
Generators Currently, three generators are defined for circles, squares and equilateral triangles. Set the width and the height of the images, and the maximum size of the geometric objects, expressed in points.
###Code
width, height = 100, 100
max_size = 25
###Output
_____no_output_____
###Markdown
The generator for circles, and an example.
###Code
circle_gen = figures.CircleGenerator(width, height, max_size)
fig = circle_gen.create()
plt.imshow(fig.data, cmap='gray');
###Output
_____no_output_____
###Markdown
The generator for squares, and an example.
###Code
square_gen = figures.SquareGenerator(width, height, max_size)
fig = square_gen.create()
plt.imshow(fig.data, cmap='gray');
###Output
_____no_output_____
###Markdown
The generator for equilateral triangles, and an example.
###Code
triangle_gen = figures.TriangleGenerator(width, height, max_size)
fig = triangle_gen.create()
plt.imshow(fig.data, cmap='gray');
###Output
_____no_output_____
###Markdown
Transformations A transformation can scale the geometric object (down), rotate it, shift it, or apply a Gaussian blur.
###Code
transformer = figures.FigureTransformer(width=width, height=height,
min_size=10, max_size=max_size,
center_margin=0.3,
blur_factor=3.0)
###Output
_____no_output_____
###Markdown
Execute some random scalings on a square.
###Code
nr_plots = 6
figure, axes = plt.subplots(ncols=nr_plots, nrows=1, figsize=(12, 4))
for i in range(nr_plots):
fig = square_gen.create()
transformer.scale(fig)
axes[i].get_xaxis().set_visible(False)
axes[i].get_yaxis().set_visible(False)
axes[i].imshow(fig.data, cmap='gray')
###Output
_____no_output_____
###Markdown
Execute some random rotations on a square.
###Code
nr_plots = 6
figure, axes = plt.subplots(ncols=nr_plots, nrows=1, figsize=(12, 4))
for i in range(nr_plots):
fig = square_gen.create()
transformer.rotate(fig)
axes[i].get_xaxis().set_visible(False)
axes[i].get_yaxis().set_visible(False)
axes[i].imshow(fig.data, cmap='gray')
###Output
_____no_output_____
###Markdown
Execute some random shifts on a square.
###Code
nr_plots = 6
figure, axes = plt.subplots(ncols=nr_plots, nrows=1, figsize=(12, 4))
for i in range(nr_plots):
fig = square_gen.create()
transformer.shift(fig)
axes[i].get_xaxis().set_visible(False)
axes[i].get_yaxis().set_visible(False)
axes[i].imshow(fig.data, cmap='gray')
###Output
_____no_output_____
###Markdown
Execute some random blurs on a triangle.
###Code
nr_plots = 6
figure, axes = plt.subplots(ncols=nr_plots, nrows=1, figsize=(12, 4))
for i in range(nr_plots):
fig = triangle_gen.create()
transformer.blur(fig)
axes[i].get_xaxis().set_visible(False)
axes[i].get_yaxis().set_visible(False)
axes[i].imshow(fig.data, cmap='gray')
###Output
_____no_output_____
###Markdown
Generate random geometric objects, with all transformations applied.
###Code
nrows, ncols = 5, 10
generators = [triangle_gen, square_gen, circle_gen]
figure, axes = plt.subplots(ncols=ncols, nrows=nrows, figsize=(12, 6))
for i in range(nrows):
for j in range(ncols):
generator = random.choice(generators)
fig = generator.create()
transformer.transform(fig)
axes[i, j].get_xaxis().set_visible(False)
axes[i, j].get_yaxis().set_visible(False)
axes[i, j].imshow(fig.data, cmap='gray')
###Output
_____no_output_____
###Markdown
HDF5 files The script part of `figures` will write an HDF5 file with the data. The input dataset is called `x_values`, the output dataset `y_values`.
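As a quick sanity check, the datasets in such a file can be listed before using them (a small sketch, assuming the `data.h5` file generated below already exists):

```python
import h5py

# Print the name, shape and dtype of every dataset in the HDF5 file.
with h5py.File('data.h5', 'r') as h5_file:
    for name, dataset in h5_file.items():
        print(name, dataset.shape, dataset.dtype)
```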
###Code
import h5py
import numpy as np
from pathlib import Path
import subprocess
nrows, ncols = 4, 5
h5_path = Path('data.h5')
if not h5_path.exists():
print(f'generating data file {h5_path.as_posix()}')
subprocess.call(['./generate_images.py', '--n', str(nrows*ncols),
'--verbose', h5_path.as_posix()])
object_types = ['circle', 'square', 'triangle']
figure, axes = plt.subplots(ncols=ncols, nrows=nrows, figsize=(12, 6))
with h5py.File(str(h5_path), 'r') as h5_file:
for i in range(nrows):
for j in range(ncols):
index = i*ncols + j
fig = h5_file['x_values'][index]
axes[i, j].get_xaxis().set_visible(False)
axes[i, j].get_yaxis().set_visible(False)
object_id = np.argmax(h5_file['y_values'][index])
axes[i, j].set_title(object_types[object_id])
axes[i, j].imshow(fig, cmap='gray')
figure.tight_layout()
###Output
generating data file data.h5
###Markdown
Multiple objects per image
###Code
nrows, ncols = 4, 5
h5_path = Path('multiple_objects_data.h5')
if not h5_path.exists():
subprocess.call(['./generate_images.py', '--n', str(nrows*ncols),
'--min_size', '5', '--max_size', '10', '--center_margin', '0.2',
'--max_objects', '5', h5_path.as_posix()])
def generate_title(counts):
names = ['c', 's', 't']
strs = map(lambda x: f'{x[0]}: {x[1]:d}', zip(names, counts))
return ', '.join(strs)
figure, axes = plt.subplots(ncols=ncols, nrows=nrows, figsize=(12, 6))
with h5py.File(h5_path.as_posix(), 'r') as h5_file:
for i in range(nrows):
for j in range(ncols):
index = i*ncols + j
fig = h5_file['x_values'][index]
axes[i, j].get_xaxis().set_visible(False)
axes[i, j].get_yaxis().set_visible(False)
title = generate_title(h5_file['y_values'][index])
axes[i, j].set_title(title)
axes[i, j].imshow(fig, cmap='gray')
figure.tight_layout()
###Output
_____no_output_____ |
FinalIDClusters (1) (2).ipynb | ###Markdown
Project: Identify Customer SegmentsIn this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. **Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks.** The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it.It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. **There will be times in the project where you will need to make and justify your own decisions on how to treat the data.** These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.At the end of most sections, there will be a Markdown cell labeled **Discussion**. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. **Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.**
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
import pprint
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import operator
from sklearn.preprocessing import LabelEncoder
import re
# magic word for producing visualizations in notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 0: Load the DataThere are four files associated with this project (not including this one):- `Udacity_AZDIAS_Subset.csv`: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).- `Udacity_CUSTOMERS_Subset.csv`: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).- `Data_Dictionary.md`: Detailed information file about the features in the provided datasets.- `AZDIAS_Feature_Summary.csv`: Summary of feature attributes for demographics data; 85 features (rows) x 4 columnsEach row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the `.csv` data files in this project: they're semicolon (`;`) delimited, so you'll need an additional argument in your [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.
###Code
# Load in the general demographics data.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv', delimiter=';')
# Load in the feature summary file.
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', delimiter=';')
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
num_rows, num_cols = azdias.shape
print('Number of columns: {}'.format(num_cols))
print('Number of rows: {}'.format(num_rows))
azdias.head(5)
#azdias.describe()
###Output
Number of columns: 85
Number of rows: 891221
###Markdown
> **Tip**: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut `esc --> a` (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and `esc --> b` adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use `esc --> m` and to convert to a code cell, use `esc --> y`. Step 1: Preprocessing Step 1.1: Assess Missing DataThe feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the **Discussion** cell with your findings and decisions at the end of each step that has one! Step 1.1.1: Convert Missing Value Codes to NaNsThe fourth column of the feature attributes summary (loaded in above as `feat_info`) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. `[-1,0]`), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.**As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.**
###Code
feat_info.head()
# Identify missing or unknown data values and convert them to NaNs.
values = ['-1','0','1','2','3','4','5','6','7','8','9']
for column in range(85):
missing_values = feat_info.iloc[column][3]
missing_values = missing_values.strip('[')
missing_values = missing_values.strip(']')
missing_values = missing_values.split(sep=',')
for i in range(len(missing_values)):
if missing_values[i] in values:
missing_values[i] = int(missing_values[i])
if(missing_values!=['']):
azdias = azdias.replace({feat_info.iloc[column][0]: missing_values}, np.nan)
print('Total number of missing values after identifying missing or unknown and converting them to NaNs: {}'.format(azdias.isnull().sum().sum()))
###Output
Total number of missing values after identifying missing or unknown and converting them to NaNs: 8373929
###Markdown
Step 1.1.2: Assess Missing Data in Each ColumnHow much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's [`hist()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)For the remaining features, are there any patterns in which columns have, or share, missing data?
###Code
# Perform an assessment of how much missing data there is in each column of the
# dataset.
missing_values_per_feature =(azdias.isnull().sum()/891221).sort_values(ascending=True)*100
missing_values_per_feature
# Investigate patterns in the amount of missing data in each column.
plt.hist(missing_values_per_feature,bins=50)
plt.title("Percentage of Missing Values Per feature");
plt.ylabel("Number of features");
plt.xlabel("Percentage missing");
plt.xticks(np.arange(0, 101, step=4))
plt.grid()
missing_values_per_feature.plot.bar(figsize=(15,10))
plt.xlabel('Column name with missing values')
plt.ylabel('Percentage of missing values')
plt.show()
print("There are {} columns with missing values.".format(len(missing_values_per_feature)))
###Output
_____no_output_____
###Markdown
As we can see, columns with a missing-value percentage greater than 18% are outliers, so we will remove those columns.
###Code
# Remove the outlier columns from the dataset. (I'll perform other data
# engineering tasks such as re-encoding and imputation later.)
outlier_column = []
for column in range(85):
current_column=missing_values_per_feature.index[column]
percent_missing_current_column=missing_values_per_feature[column]
if percent_missing_current_column >18:
outlier_column.append(current_column)
#printing the outlier column
outlier_column
###Output
_____no_output_____
###Markdown
We can see that there are 6 outlier columns. Let's drop them.
###Code
#removing the outlier column from the dataset
azdias = azdias.drop(columns=outlier_column)
###Output
_____no_output_____
###Markdown
Discussion 1.1.2: Assess Missing Data in Each ColumnLooking at the histogram and the bar plot above, the columns have missing-value percentages ranging from under 0.5% up to roughly 99%: 5 columns have about 0.4% missing values, the percentage rises to around 15% for most of the remaining columns, and the last 6 columns stand out with far higher percentages, each missing well over 200K of the 891,221 entries while the other columns are missing far fewer. I therefore decided to drop columns with more than 18% missing values, which removes exactly these 6 outlier columns: 'TITEL_KZ', 'AGER_TYP', 'KK_KUNDENTYP', 'KBA05_BAUMAX', 'GEBURTSJAHR', 'ALTER_HH'. There are also patterns in the data: columns with similar names, such as LP_FAMILIE_FEIN and LP_FAMILIE_GROB, have the same number of missing entries. Step 1.1.3: Assess Missing Data in Each RowNow, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.- You can use seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) function to create a bar chart of code frequencies and matplotlib's [`subplot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplot.html) function to put bar charts for the two subplots side by side.- To reduce repeated code, you might want to write a function that can perform this comparison, taking as one of its arguments a column to be compared.Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. **Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.**
###Code
# How much data is missing in each row of the dataset?
plt.hist(azdias.isnull().sum(axis=1),bins=50)
plt.title("Percentage of Missing Values Per feature");
plt.ylabel("Number of rows");
plt.xlabel("Number of features missing");
plt.xticks(np.arange(0, 51, step=2))
plt.grid()
# Write code to divide the data into two subsets based on the number of missing
# values in each row.
azdias_low = azdias[azdias.isnull().sum(axis=1) <= 2]
azdias_high = azdias[azdias.isnull().sum(axis=1) > 2]
#Finding the columns having zero missing values
zero_missing=[]
for i in range(79):
if(missing_values_per_feature[i]==0):
zero_missing.append(missing_values_per_feature.index[i])
zero_missing
# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.
fig, axs = plt.subplots(10, figsize=(10,19))
fig.subplots_adjust(hspace = 2, wspace=.2)
for i in range(10):
if(i%2==0):
sns.countplot(azdias_low[zero_missing[i]], ax=axs[i])
axs[i].set_title('few features have missing values')
sns.countplot(azdias_high[zero_missing[i]], ax=axs[i+1])
axs[i+1].set_title('more features have missing values')
continue
###Output
_____no_output_____
###Markdown
Discussion 1.1.3: Assess Missing Data in Each RowHere I split the data into two subsets: rows with at most 2 missing values (azdias_low) and rows with more (azdias_high). To deal with the remaining missing values I have two options: 1) dropping all rows that still contain missing values, or 2) filling the missing values with the mode of each column (both options are sketched at the top of the next code cell; option 2 is the one applied later, in Step 2.1). Step 1.2: Select and Re-Encode FeaturesChecking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (`feat_info`) for a summary of types of measurement.- For numeric and interval data, these features can be kept without changes.- Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).- Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns.Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!
###Code
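# Hedged sketch of the two options mentioned in Discussion 1.1.3 (illustration only; the
# variables below are throwaway names, and the mode-based fill is what the project applies
# later, in Step 2.1).
# Option 1: drop every row that still has any missing value.
azdias_low_dropped_sketch = azdias_low.dropna()
# Option 2: fill each column's remaining missing values with that column's mode.
azdias_low_mode_filled_sketch = azdias_low.fillna(azdias_low.mode().iloc[0])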
# How many features are there of each data type?
features = list(azdias.columns.values)
feat_info = feat_info[feat_info['attribute'].isin(features)]
feat_info['type'].value_counts()
###Output
_____no_output_____
###Markdown
Step 1.2.1: Re-Encode Categorical FeaturesFor categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:- For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything.- There is one binary variable that takes on non-numeric values. For this one, you need to re-encode the values as numbers or create a dummy variable.- For multi-level categoricals (three or more values), you can choose to encode the values using multiple dummy variables (e.g. via [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)), or (to keep things straightforward) just drop them from the analysis. As always, document your choices in the Discussion section.
###Code
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?
categorical_variables = feat_info[feat_info['type'] == 'categorical']
categorical_variables
# separating the multi level and binary categorical variable
multi_level = []
binary = []
for feature in list(categorical_variables['attribute']):
if len(azdias[feature].value_counts()) > 2:
multi_level.append(feature)
else:
binary.append(feature)
multi_level
###Output
_____no_output_____
###Markdown
I will be dropping the multi-level categorical variables (a hedged sketch of the one-hot-encoding alternative follows, for comparison).
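A minimal sketch of the alternative mentioned in the instructions, assuming one wanted to keep the multi-level categoricals instead of dropping them; `azdias_low_onehot_alt` is a throwaway name that the rest of the pipeline does not use.
###Code
# Hedged alternative (illustration only): one-hot encode the multi-level categorical columns;
# pd.get_dummies plays the role of sklearn's OneHotEncoder here.
azdias_low_onehot_alt = pd.get_dummies(azdias_low, columns=multi_level, prefix=multi_level)
###Output
_____no_output_____
###Markdown
This keeps the information in those columns at the cost of many extra, sparse features; the simpler drop-based approach is used below.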
###Code
#Dropping the multi level categorical variables
for column in multi_level:
azdias_low=azdias_low.drop(column, axis=1)
# Re-encoding all the binary categorical variable(s)(both numeric and non numeric) to be kept in the analysis.
azdias_low_dummies = pd.get_dummies(data=azdias_low, columns=binary, prefix=binary)
azdias_low_dummies.head()
###Output
_____no_output_____
###Markdown
Discussion 1.2.1: Re-Encode Categorical Features>Here I have dropped the multi level categorical variables and re-encoded all the other binary categorical variables(both numeric and non numeric). I am carrying out this analysis on the subset of the data with few or no missing values. Step 1.2.2: Engineer Mixed-Type FeaturesThere are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest are up to your own choices:- "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.- "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).- If you decide to keep or engineer new features around the other mixed-type features, make sure you note your steps in the Discussion section.Be sure to check `Data_Dictionary.md` for the details needed to finish these tasks.
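Before the project's loop-based implementation below, here is a minimal sketch of the same engineering using a dictionary map and integer arithmetic; the `*_sketch` names are throwaway, and the mappings simply mirror the encodings described above.
###Code
# Hedged sketch (illustration only; the loop-based implementation used by the project follows below).
# Decade of youth: map the 15 PRAEGENDE_JUGENDJAHRE codes onto 1 (40s) ... 6 (90s).
decade_map = {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3, 7: 3, 8: 4, 9: 4,
              10: 5, 11: 5, 12: 5, 13: 5, 14: 6, 15: 6}
decade_sketch = azdias_low_dummies['PRAEGENDE_JUGENDJAHRE'].map(decade_map)
# Movement: 1 = avantgarde, 2 = mainstream.
movement_map = {code: 1 for code in [2, 4, 6, 7, 9, 11, 13, 15]}
movement_map.update({code: 2 for code in [1, 3, 5, 8, 10, 12, 14]})
movement_sketch = azdias_low_dummies['PRAEGENDE_JUGENDJAHRE'].map(movement_map)
# Wealth / life stage: split the two-digit CAMEO_INTL_2015 code into its tens and ones digits.
cameo_numeric = pd.to_numeric(azdias_low_dummies['CAMEO_INTL_2015'], errors='coerce')
wealth_sketch = cameo_numeric // 10
life_stage_sketch = cameo_numeric % 10
###Output
_____no_output_____
###Markdown
Either way the result is the same four engineered columns; the explicit loops below are kept because they spell out every code from the data dictionary.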
###Code
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
DECADE = []
value1 = list(azdias_low_dummies['PRAEGENDE_JUGENDJAHRE'])
for i in range (len(list(azdias_low_dummies['PRAEGENDE_JUGENDJAHRE']))):
value = value1[i]
if value in [1,2]:
DECADE.append(1)
elif value in [3,4]:
DECADE.append(2)
elif value in [5,6,7]:
DECADE.append(3)
elif value in [8,9]:
DECADE.append(4)
elif value in [10,11,12,13]:
DECADE.append(5)
elif value in [14,15]:
DECADE.append(6)
else:
DECADE.append(np.nan)
MOVEMENT =[]
for i in range (len(value1)):
value = value1[i]
if value in [2,4,6,7,9,11,13,15]:
MOVEMENT.append(1)
elif value in [1,3,5,8,10,12,14]:
MOVEMENT.append(2)
else:
MOVEMENT.append(np.nan)
#Adding the two new features and dropping 'PRAEGENDE_JUGENDJAHRE'
azdias_low_dummies["PRAEGENDE_JUGENDJAHRE_MOVEMENT"] = MOVEMENT
azdias_low_dummies["PRAEGENDE_JUGENDJAHRE_DECADE"] = DECADE
azdias_low_dummies=azdias_low_dummies.drop("PRAEGENDE_JUGENDJAHRE", axis=1)
# Investigate "CAMEO_INTL_2015" and engineer two new variables.
WEALTH = []
value2 = list(azdias_low_dummies['CAMEO_INTL_2015'])
for i in range (len(value2)):
value=value2[i]
if(type(value)!=float):
WEALTH.append(int(value[0]))
else:
WEALTH.append(np.nan)
LIFE_STAGE =[]
for i in range (len(value2)):
value=value2[i]
if(type(value)!=float):
LIFE_STAGE.append(int(value[1]))
else:
LIFE_STAGE.append(np.nan)
#Adding the two new features and dropping "CAMEO_INTL_2015"
azdias_low_dummies["CAMEO_INTL_2015_WEALTH"] = WEALTH
azdias_low_dummies["CAMEO_INTL_2015_LIFE_STAGE"] = LIFE_STAGE
azdias_low_dummies=azdias_low_dummies.drop("CAMEO_INTL_2015", axis=1)
azdias_low_dummies.head()
###Output
_____no_output_____
###Markdown
Discussion 1.2.2: Engineer Mixed-Type FeaturesI engineered 2 new features each for 'PRAEGENDE_JUGENDJAHRE' **and** "CAMEO_INTL_2015".>"PRAEGENDE_JUGENDJAHRE_MOVEMENT" : 1 for AVANTGARDE and 2 for MAINSTREAM>"PRAEGENDE_JUGENDJAHRE_DECADE" : 1 for 1940s, 2 for 1950s, 3 for 1960s, 4 for 1970s, 5 for 1980s and 6 for 1990s >"CAMEO_INTL_2015_WEALTH" : 1 for WEALTHY, 2 for PROSPEROUS, 3 for COMFORTABLE, 4 for LESS AFFLUENT and 5 for POORER>"CAMEO_INTL_2015_LIFE_STAGE" : 1 for Pre-Family Couples & Singles, 2 for Young Couples With Children, 3 for Families With School Age Children, 4 for Older Families & Mature Couples and 5 for Elders In Retirement Step 1.2.3: Complete Feature SelectionIn order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:- All numeric, interval, and ordinal type columns from the original dataset.- Binary categorical features (all numerically-encoded).- Engineered features from other multi-level categorical features and mixed features.Make sure that for any new columns that you have engineered, that you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from **the subset with few or no missing values**. Step 1.3: Create a Cleaning FunctionEven though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.
###Code
def clean_data(df):
"""
Perform feature trimming, re-encoding, and engineering for demographics
data
INPUT: Demographics DataFrame
OUTPUT: Trimmed and cleaned demographics DataFrame
"""
# Put in code here to execute all main cleaning steps:
# Load in the feature summary file.
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', delimiter=';')
# convert missing value codes into NaNs, ...
values = ['-1','0','1','2','3','4','5','6','7','8','9']
for column in range(85):
missing_values = feat_info.iloc[column][3]
missing_values = missing_values.strip('[')
missing_values = missing_values.strip(']')
missing_values = missing_values.split(sep=',')
for i in range(len(missing_values)):
if missing_values[i] in values:
missing_values[i] = int(missing_values[i])
if(missing_values!=['']):
df = df.replace({feat_info.iloc[column][0]: missing_values}, np.nan)
# Perform an assessment of how much missing data there is in each column of the
# dataset.
    missing_values_per_feature = df.isnull().sum()
    missing_values_per_feature = missing_values_per_feature / df.shape[0]
    missing_values_per_feature = missing_values_per_feature.sort_values() * 100
# Remove the outlier columns from the dataset. (You'll perform other data
# engineering tasks such as re-encoding and imputation later.)
outlier_column = []
    for column in range(85):
        current_column = missing_values_per_feature.index[column]
        percent_missing_current_column = missing_values_per_feature[column]
        if percent_missing_current_column > 18:
            outlier_column.append(current_column)
# remove selected columns and rows, ...
df = df.drop(columns=outlier_column)
df_low = df[df.isnull().sum(axis=1) < 5]
# select, re-encode, and engineer column values.
    categorical_variables = feat_info[(feat_info['type'] == 'categorical')
                                      & (feat_info['attribute'].isin(df.columns))]
multi_level = []
binary = []
for feature in list(categorical_variables['attribute']):
if len(df[feature].value_counts()) > 2:
multi_level.append(feature)
else:
binary.append(feature)
#Dropping the multi level categorical variables
for column in multi_level:
df_low=df_low.drop(column, axis=1)
# Re-encode categorical variable(s) to be kept in the analysis.
df_low_dummies = pd.get_dummies(data=df_low, columns=binary, prefix=binary)
df_low_dummies.head()
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
    # Decade of youth: 1 (40s) ... 6 (90s), mirroring the encoding used on the general population above.
    DECADE = []
    value1 = list(df_low_dummies['PRAEGENDE_JUGENDJAHRE'])
    for i in range(len(value1)):
        value = value1[i]
        if value in [1, 2]:
            DECADE.append(1)
        elif value in [3, 4]:
            DECADE.append(2)
        elif value in [5, 6, 7]:
            DECADE.append(3)
        elif value in [8, 9]:
            DECADE.append(4)
        elif value in [10, 11, 12, 13]:
            DECADE.append(5)
        elif value in [14, 15]:
            DECADE.append(6)
        else:
            DECADE.append(np.nan)
    # Movement: 1 = avantgarde, 2 = mainstream, mirroring the encoding used on the general population above.
    MOVEMENT = []
    for i in range(len(value1)):
        value = value1[i]
        if value in [2, 4, 6, 7, 9, 11, 13, 15]:
            MOVEMENT.append(1)
        elif value in [1, 3, 5, 8, 10, 12, 14]:
            MOVEMENT.append(2)
        else:
            MOVEMENT.append(np.nan)
    df_low_dummies["PRAEGENDE_JUGENDJAHRE_MOVEMENT"] = MOVEMENT
    df_low_dummies["PRAEGENDE_JUGENDJAHRE_DECADE"] = DECADE
df_low_dummies=df_low_dummies.drop("PRAEGENDE_JUGENDJAHRE", axis=1)
# Investigate "CAMEO_INTL_2015" and engineer two new variables.
WEALTH = []
value2 = list(df_low_dummies['CAMEO_INTL_2015'])
for i in range (len(value2)):
value=value2[i]
if(type(value)!=float):
WEALTH.append(int(value[0]))
else:
WEALTH.append(np.nan)
LIFE_STAGE =[]
for i in range (len(value2)):
value=value2[i]
if(type(value)!=float):
LIFE_STAGE.append(int(value[1]))
else:
LIFE_STAGE.append(np.nan)
df_low_dummies["CAMEO_INTL_2015_WEALTH"] = WEALTH
df_low_dummies["CAMEO_INTL_2015_LIFE_STAGE"] = LIFE_STAGE
df_low_dummies=df_low_dummies.drop("CAMEO_INTL_2015", axis=1)
# Return the cleaned dataframe.
return df_low_dummies
###Output
_____no_output_____
###Markdown
Step 2: Feature Transformation Step 2.1: Apply Feature ScalingBefore we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the [API reference page for sklearn](http://scikit-learn.org/stable/modules/classes.html) to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:- sklearn requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to your data, make sure that you've cleaned the DataFrame of the remaining missing values. This can be as simple as just removing all data points with missing data, or applying an [Imputer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html) to replace all missing values. You might also try a more complicated procedure where you temporarily remove missing values in order to compute the scaling parameters before re-introducing those missing values and applying imputation. Think about how much missing data you have and what possible effects each approach might have on your analysis, and justify your decision in the discussion section below.- For the actual scaling function, a [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) instance is suggested, scaling each feature to mean 0 and standard deviation 1.- For these classes, you can make use of the `.fit_transform()` method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.
###Code
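# Hedged alternative (illustration only; the mode-based fillna used below is what the project applies):
# an sklearn imputer could do the same job and slot into a Pipeline. The import assumes
# sklearn >= 0.20 (sklearn.impute.SimpleImputer); older releases expose sklearn.preprocessing.Imputer instead.
from sklearn.impute import SimpleImputer
imputer_sketch = SimpleImputer(strategy='most_frequent')
imputed_array_sketch = imputer_sketch.fit_transform(azdias_low_dummies)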
#imputing missing values with the mode
azdias_low_dummies[list(azdias_low_dummies.columns)] = azdias_low_dummies[list(azdias_low_dummies.columns)].fillna(azdias_low_dummies.mode().iloc[0])
# Apply feature scaling to the general population demographics data.
scaler = StandardScaler()
azdias_low_dummies[list(azdias_low_dummies.columns)] = scaler.fit_transform(azdias_low_dummies[list(azdias_low_dummies.columns)])
azdias_low_dummies.head()
###Output
_____no_output_____
###Markdown
Discussion 2.1: Apply Feature ScalingI replaced the missing values with the mode of the respective column before applying feature scaling with a StandardScaler. Step 2.2: Perform Dimensionality ReductionOn your scaled data, you are now ready to apply dimensionality reduction techniques.- Use sklearn's [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) class to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, you should not set any parameters (so all components are computed) or set a number of components that is at least half the number of features (so there's enough features to see the general trend in variability).- Check out the ratio of variance explained by each principal component as well as the cumulative variance explained. Try plotting the cumulative or sequential values using matplotlib's [`plot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.- Once you've made a choice for the number of components to keep, make sure you re-fit a PCA instance to perform the decided-on transformation.
###Code
# Apply PCA to the data.
pca = PCA()
azdias_low_pca = pca.fit_transform(azdias_low_dummies)
# Investigate the variance accounted for by each principal component.
def scree_plot(pca):
'''
Creates a scree plot associated with the principal components
INPUT: pca - the result of instantian of PCA in scikit learn
OUTPUT:
None
Credit:Udacity Nanodegree
'''
num_components = len(pca.explained_variance_ratio_)
ind = np.arange(num_components)
vals = pca.explained_variance_ratio_
plt.figure(figsize=(10, 6))
ax = plt.subplot(111)
cumvals = np.cumsum(vals)
ax.bar(ind, vals)
ax.plot(ind, cumvals)
for i in range(num_components):
if(i%5==0):
ax.annotate(r"%s%%" % ((str(vals[i]*100)[:4])), (ind[i]+0.2, vals[i]), va="bottom", ha="center", fontsize=12)
ax.xaxis.set_tick_params(width=0)
ax.yaxis.set_tick_params(width=2, length=12)
ax.set_xlabel("Principal Component")
ax.set_ylabel("Variance Explained (%)")
plt.title('Explained Variance Per Principal Component')
scree_plot(pca)
# Re-apply PCA to the data while selecting for number of components to retain.
pca = PCA(n_components=30)
azdias_low_pca = pca.fit_transform(azdias_low_dummies)
scree_plot(pca)
###Output
_____no_output_____
###Markdown
Discussion 2.2: Perform Dimensionality ReductionAfter performing PCA on the dataset, the variance explained per principal component becomes low beyond about the 30th component, so I decided to keep 30 components. Step 2.3: Interpret Principal ComponentsNow that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. To contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.- To investigate the features, you should map each weight to their corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. Use the data dictionary document to help you understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.- You should investigate and interpret feature associations from the first three principal components in this substep. To help facilitate this, you should write a function that you can call at any time to print the sorted list of feature weights, for the *i*-th principal component. This might come in handy in the next step of the project, when you interpret the tendencies of the discovered clusters.
###Code
#function for calculating weights of the components
def weights(nth_pc,pca,df):
name_list=list(df.columns)
weights_list=list(pca.components_[nth_pc])
df =pd.DataFrame(list(zip(name_list, weights_list)))
df.set_axis(['Feature', 'Weights'], axis=1, inplace=True)
df = df.sort_values(by=['Weights'] , ascending=False)
df.set_index('Feature', inplace=True)
return df
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.
print(weights(1,pca,azdias_low_dummies).head(2))
print(weights(1,pca,azdias_low_dummies).tail(2))
###Output
Weights
Feature
ALTERSKATEGORIE_GROB 0.258114
SEMIO_ERL 0.250297
Weights
Feature
SEMIO_KULT -0.243112
SEMIO_REL -0.274878
###Markdown
Principal 1The strongest positive feature weights: ALTERSKATEGORIE_GROB and SEMIO_ERL, which implies their age is more than 60 years (or uniformly distributed) and they have a lower affinity for events. The strongest negative feature weights: SEMIO_KULT and SEMIO_REL, which implies they are more culturally minded and less religious.
###Code
# Map weights for the second principal component to corresponding feature names
# and then print the linked values, sorted by weight.
print(weights(2,pca,azdias_low_dummies).head(2))
print(weights(2,pca,azdias_low_dummies).tail(2))
###Output
Weights
Feature
ANREDE_KZ_1 0.336667
SEMIO_VERT 0.316804
Weights
Feature
SEMIO_KAEM -0.301604
ANREDE_KZ_2 -0.336667
###Markdown
Principal 2The strongest positive feature weights: ANREDE_KZ_1 and SEMIO_VERT, which implies they are male and are less dreamy. The strongest negative feature weights: SEMIO_KAEM and ANREDE_KZ_2, which implies they are pugnacious and are not female.
###Code
# Map weights for the third principal component to corresponding feature names
# and then print the linked values, sorted by weight.
print(weights(3,pca,azdias_low_dummies).head(2))
print(weights(3,pca,azdias_low_dummies).tail(2))
###Output
Weights
Feature
GREEN_AVANTGARDE_1 0.357257
EWDICHTE 0.240387
Weights
Feature
PRAEGENDE_JUGENDJAHRE_MOVEMENT -0.357257
GREEN_AVANTGARDE_0 -0.357257
###Markdown
Principal 3The strongest positive feature weights: GREEN_AVANTGARDE_1 and EWDICHTE, which implies they are members of the Green Avantgarde and live in areas with a higher density of households per square kilometer. The strongest negative feature weights: PRAEGENDE_JUGENDJAHRE_MOVEMENT and GREEN_AVANTGARDE_0 (matching the output above), which is consistent with the positive weights: they are not part of the mainstream movement and not in the non-avantgarde group. Discussion 2.3: Interpret Principal ComponentsI mapped each weight to its corresponding feature name and identified the most interesting features for the first three principal components in this substep. Step 3.1: Apply Clustering to General PopulationYou've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.- Use sklearn's [KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans) class to perform k-means clustering on the PCA-transformed data.- Then, compute the average difference from each point to its assigned cluster's center. **Hint**: The KMeans object's `.score()` method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. Try applying it to a small, toy dataset, or use an internet search to help your understanding.- Perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. However, each additional cluster provides a smaller net benefit. Use this fact to select a final number of clusters in which to group the data. **Warning**: because of the large size of the dataset, it can take a long time for the algorithm to resolve. The more clusters to fit, the longer the algorithm will take. You should test for cluster counts through at least 10 clusters to get the full picture, but you shouldn't need to test for a number of clusters above about 30.- Once you've selected a final number of clusters to use, re-fit a KMeans instance to perform the clustering operation. Make sure that you also obtain the cluster assignments for the general demographics data, since you'll be using them in the final Step 3.3.
###Code
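# Toy check of KMeans.score(), as the hint above suggests (made-up points, illustration only):
# sklearn defines score(X) as the negative of the sum of squared distances to the closest
# centroid (the negative inertia), so "larger is better" and np.abs() is used below.
toy_points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
toy_kmeans = KMeans(n_clusters=2, random_state=0).fit(toy_points)
print(toy_kmeans.score(toy_points), toy_kmeans.inertia_)  # expect roughly -1.0 and 1.0 here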
# Over a number of different cluster counts...
average=[]
for i in range(15):
    if(i%2==0 and i!=0):
        # run k-means clustering on the PCA-transformed data and...
        kmeans = KMeans(n_clusters=i).fit(azdias_low_pca)
        # Obtain a score related to the model fit
        score = np.abs(kmeans.score(azdias_low_pca))
        # compute the average within-cluster distances.
        print("For n_clusters={}, average within-cluster distances = {}".format(i, score/azdias_low_pca.shape[0]))
        average.append(score/azdias_low_pca.shape[0])
# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship.
n=[2,4,6,8,10,12,14]
cluster_df =pd.DataFrame(list(zip(average,n)))
cluster_df.set_axis(['average', 'n'], axis=1, inplace=True)
cluster_df.plot.bar(x='n', y='average')
# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
kmeans_general = KMeans(n_clusters=14)
kmeans_pred_general = kmeans_general.fit_predict(azdias_low_pca)
###Output
_____no_output_____
###Markdown
Discussion 3.1: Apply Clustering to General PopulationI fitted the k means modelwith 14 clusters using sklearns on my PCA transformed data. Step 3.2: Apply All Steps to the Customer DataNow that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.- Don't forget when loading in the customers data, that it is semicolon (`;`) delimited.- Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the `clean_data()` function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)- Use the sklearn objects from the general demographics data, and apply their transformations to the customers data. That is, you should not be using a `.fit()` or `.fit_transform()` method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.
###Code
# Load in the customer demographics data.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv',delimiter=';')
customers.shape
# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
customers_clean=clean_data(customers)
customers_clean.shape
# scale using the StandardScaler fit on the general population
# (fill any remaining missing values with the column mode first, as was done for the general population)
customers_clean = customers_clean.fillna(customers_clean.mode().iloc[0])
customers_clean[customers_clean.columns] = scaler.transform(customers_clean[customers_clean.columns])
# transform the customers data using the PCA object fit on the general population
customers_clean_pca = pca.transform(customers_clean)
# predict cluster assignments using the k-means model fit on the general population
predict_customers = kmeans_general.predict(customers_clean_pca)
###Output
_____no_output_____
###Markdown
Step 3.3: Compare Customer Data to Demographics DataAt this point, I have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, I will compare the two cluster distributions to see where the strongest customer base for the company is.Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.Take a look at the following points in this step:- Compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, but also to visualize the ratios in cluster representation between groups. Seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) or [`barplot()`](https://seaborn.pydata.org/generated/seaborn.barplot.html) function could be handy. - Recall the analysis you performed in step 1.1.3 of the project, where you separated out certain data points from the dataset if they had more than a specified threshold of missing values. If you found that this group was qualitatively different from the main bulk of the data, you should treat this as an additional data cluster in this analysis. Make sure that you account for the number of data points in this subset, for both the general population and customer datasets, when making your computations!- Which cluster or clusters are overrepresented in the customer dataset compared to the general population? Select at least one such cluster and infer what kind of people might be represented by that cluster. Use the principal component interpretations from step 2.3 or look at additional components to help you make this inference. Alternatively, you can use the `.inverse_transform()` method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly.- Perform a similar investigation for the underrepresented clusters. Which cluster or clusters are underrepresented in the customer dataset compared to the general population, and what kinds of people are typified by these clusters?
###Code
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
from collections import Counter
Counter(kmeans_pred_general)
Counter(predict_customers)
labels, values = zip(*Counter(predict_customers).items())
v=list(values)
v[:] = [x/len(predict_customers) for x in v]
indexes = np.arange(len(labels))
width = .5
labels1, values1 = zip(*Counter(kmeans_pred_general).items())
v1=list(values1)
v1[:] = [x/len(kmeans_pred_general) for x in v1]
indexes1 = np.arange(len(labels))
plt.bar(indexes1, v1, width ,label='General')
plt.bar(indexes+width, v, width, color='r' , label='Customer')
plt.xticks(indexes + width * 0.5, labels)
plt.legend(loc='upper left')
plt.grid()
plt.show()
# From the above plot: clusters 12, 9, 6, 3, 4 and 5 are under-represented among customers
# compared to the general population, while clusters 2, 13, 11, 0 and 8 are over-represented.
#Function for printing the top feature
def top_feature(cl_no,pca,df):
print("Top feature for n_clusters:{} is {} ".format(cl_no,weights(cl_no,pca,df).index[0]))
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?
top_feature(2,pca,customers_clean)
top_feature(13,pca,customers_clean)
top_feature(11,pca,customers_clean)
top_feature(10,pca,customers_clean)
top_feature(8,pca,customers_clean)
###Output
Top feature for n_clusters:2 is SEMIO_REL
Top feature for n_clusters:13 is ANZ_HH_TITEL
Top feature for n_clusters:11 is VERS_TYP_2.0
Top feature for n_clusters:10 is ANZ_HAUSHALTE_AKTIV
Top feature for n_clusters:8 is SOHO_KZ_1.0
###Markdown
>SEMIO_REL implies they are religious. >ANZ_HH_TITEL implies they have a residential building. >VERS_TYP_2.0 implies they are individualistic and accepting of risks. >ANZ_HAUSHALTE_AKTIV implies that they live in a building with a higher number of households. >SOHO_KZ_1.0 implies a small office/home office.
###Code
# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?
top_feature(12,pca,customers_clean)
top_feature(9,pca,customers_clean)
top_feature(6,pca,customers_clean)
top_feature(3,pca,customers_clean)
top_feature(4,pca,customers_clean)
top_feature(5,pca,customers_clean)
###Output
Top feature for n_clusters:12 is ANZ_HH_TITEL
Top feature for n_clusters:9 is OST_WEST_KZ_W
Top feature for n_clusters:6 is KBA13_ANZAHL_PKW
Top feature for n_clusters:3 is GREEN_AVANTGARDE_1
Top feature for n_clusters:4 is LP_LEBENSPHASE_GROB
Top feature for n_clusters:5 is VERS_TYP_1.0
|
PlanarSLAMExample_lbp.ipynb | ###Markdown
Loopy Belief PropagationRecently, **loopy belief propagation** or LBP has become trendy again because it might be a good fit for highly parallel, distributed processing in chips like GraphCore.In this example we re-create the planar SLAM example, and show how LBP can be viewed as repeated elimination in the factor graph, and how it can be seen as the "dual" of Gibbs sampling, which adopts the dual elimination strategy.**Important note Apr 30 2022: this still contains a bug in LBP**
###Code
%pip -q install gtbook # also installs latest gtsam pre-release
import math
from math import pi, sqrt
import matplotlib.pyplot as plt
import numpy as np
from collections import defaultdict
import plotly.express as px
try:
import google.colab
except:
import plotly.io as pio
pio.renderers.default = "png"
import gtsam
import gtsam.utils.plot as gtsam_plot
from gtbook.display import show
from gtbook.gaussian import sample_bayes_net, sample_conditional
from gtbook.driving import planar_example, marginals_figure
from gtsam import Point2, Pose2, Rot2, noiseModel
###Output
_____no_output_____
###Markdown
Setting up a non-linear SLAM ExampleBelow we re-create a factor graph similar to the one in `PlanarSLAMExample`:
###Code
graph, truth, keys = planar_example()
x1, x2, x3, l1, l2 = keys
show(graph, truth, binary_edges=True)
###Output
_____no_output_____
###Markdown
As always, we can calculate and plot covariance ellipses which show the Laplace approximation graphically.
###Code
marginals = gtsam.Marginals(graph, truth)
marginals_figure(truth, marginals, keys)
###Output
_____no_output_____
###Markdown
Loopy Belief PropagationWe initialize a set of individual *Gaussian* factors $q(x_j)$ or *beliefs*, one for each variable. LBP is a fixed-point algorithm to minimize the KL divergence $D_\text{KL}(p||q)$ between the true posterior $p(X|Z)$ and the variational approximation$$q(X) = \prod_j q(x_j)$$We repeatedly:- pick a variable $x_j$ at random;- consider the Markov blanket of $x_j$, the factor graph fragment $\phi(x_j, X_j)$ where $X_j$ is the separator;- augment the factor graph fragment with beliefs on all $x_k\in X_j$, *except* $q(x_j)$;- eliminate the separator $X_j$ by factorizing $\phi(x_j, X_j) = p(X_j|x_j)q'(x_j)$;- assign $q(x_j) \leftarrow q'(x_j)$ to be the new belief on $x_j$.We first cache all Markov blankets:
###Code
markov_blankets = defaultdict(gtsam.NonlinearFactorGraph)
for i in range(graph.size()):
factor = graph.at(i)
for j in factor.keys():
markov_blankets[j].add(factor)
###Output
_____no_output_____
###Markdown
Here are the Markov blankets for $l_2$ (simple) and $x_2$ (complex):
###Code
show(markov_blankets[l2], truth, binary_edges=True)
show(markov_blankets[x2], truth, binary_edges=True)
###Output
_____no_output_____
###Markdown
We initialize the beliefs $q_j(x_j)$ on the manifold, which we do in *information form* by using a `HessianFactor`, whose constructor reads as follows:```c++/** Construct a unary factor. G is the quadratic term (Hessian matrix), g* the linear term (a vector), and f the constant term. The quadratic* error is:* 0.5*(f - 2*x'*g + x'*G*x)*/HessianFactor(Key j, const Matrix& G, const Vector& g, double f);```We will initialize with an identity Hessian/information matrix $G$. The entire LBP code is then:
###Code
def lbp(x0:gtsam.Values, hook=None, N=100):
"""Perform loopy belief propagation with initial estimate x."""
x = gtsam.Values(x0)
error = graph.error(x)
q = {key: gtsam.HessianFactor(key, G=np.eye(n), g=5*np.zeros(
(n, 1)), f=0) for key, n in zip(keys, [3, 3, 3, 2, 2])}
hook(0, None, x, q, error)
def update(j:int, x:gtsam.Values):
# Get linearized Gaussian Markov blanket and augment with beliefs
augmented_graph = markov_blankets[j].linearize(x)
augmented_keys = augmented_graph.keys()
for k in keys:
if k != j and augmented_keys.count(k):
augmented_graph.add(q[k])
try:
# Eliminate with x_j eliminated last:
ordering = gtsam.Ordering.ColamdConstrainedLastGaussianFactorGraph(
augmented_graph, [j])
gbn = augmented_graph.eliminateSequential(ordering)
q_prime = gbn.at(gbn.size()-1)
# move on the manifold
delta = q_prime.solve(gtsam.VectorValues())
new_x = x.retract(delta)
n = len(delta.at(j))
P = np.linalg.inv(q_prime.information())
q[j] = gtsam.HessianFactor(j, np.zeros((n,)), P)
return new_x
except:
return None
for it in range(1, N):
# choose a variable whose belief to update
j = keys[rng.choice(5)]
new_x = update(j, x)
if new_x is not None:
error = graph.error(new_x)
hook(it, j, x, q, error)
x = new_x
if error < 1e-1:
break
return x, q
###Output
_____no_output_____
###Markdown
We then initialize with either the ground truth or some random values:
###Code
rng = np.random.default_rng(42)
if False:
initial = gtsam.Values(truth)
else:
initial = gtsam.Values()
initial.insert(x1, Pose2(2, 1, 0).retract(0.1*rng.random((3,))))
initial.insert(x2, Pose2(2, 1, 0).retract(0.1*rng.random((3,))))
initial.insert(x3, Pose2(2, 1, 0).retract(0.1*rng.random((3,))))
initial.insert(l1, Point2(2, 1)+rng.random((2,)))
initial.insert(l2, Point2(2, 1)+rng.random((2,)))
def print_hook(it, j, x, q, error):
if it==0:
print(f"{it=}, initial error is {error}")
else:
print(f"{it=}, updated {gtsam.DefaultKeyFormatter(j)}, error now {error}")
rng = np.random.default_rng(42)
x, q = lbp(initial, print_hook)
# plot final state
for j in keys[:3]:
P = np.linalg.inv(q[j].information())
gtsam_plot.plot_point2(0, x.atPose2(j).translation(), 'b', P)
for j in keys[3:]:
P = np.linalg.inv(q[j].information())
gtsam_plot.plot_point2(0, x.atPoint2(j), 'r', P)
def plot_ellipse(j, x, q):
P = np.linalg.inv(q[j].information())
if j in {x1, x2, x3}:
gtsam_plot.plot_point2(0, x.atPose2(j).translation(), 'b', P)
else:
gtsam_plot.plot_point2(0, x.atPoint2(j), 'r', P)
def ellipse_hook(it, j, x, q, error):
if it==0:
for k in keys: plot_ellipse(k, x, q)
else:
plot_ellipse(j, x, q)
rng = np.random.default_rng(42)
marginals_figure(truth, marginals, keys)
x = gtsam.Values(initial)
x, q = lbp(x, ellipse_hook)
def show_frame(it, j, x, q, error):
if it % 5 != 0: return
for k in keys:
P = np.linalg.inv(q[k].information())
if k in [x1,x2,x3]:
gtsam_plot.plot_point2(0, x.atPose2(k).translation(), 'b', P)
else:
gtsam_plot.plot_point2(0, x.atPoint2(k), 'r', P)
plt.axis('equal')
plt.xlim([-0.8, 6])
plt.ylim([-0.8, 3])
plt.show()
rng = np.random.default_rng(42)
x, q = lbp(gtsam.Values(initial), show_frame)
###Output
_____no_output_____
###Markdown
Gibbs SamplingGibbs sampling is a variant of Markov Chain Monte Carlo sampling that always accepts any proposal.We repeatedly:- pick a variable $x_j$ at random;- consider the Markov blanket of $x_j$, the factor graph fragment $\phi(x_j, X_j)$ where $X_j$ is the separator;- eliminate the variable $x_j$ by factorizing $\phi(x_j, X_j) = p(x_j|X_j)\phi(X_j)$;- sample $x_j$ from the conditional $p(x_j|X_j)$.We will need:
###Code
rng = np.random.default_rng(42)
def plot_sample(manifold_sample, alpha=0.1):
points = np.empty((2, 5))
for i in [1, 2, 3]:
points[:, i -
1] = manifold_sample.atPose2(gtsam.symbol('x', i)).translation()
for j in [1, 2]:
points[:, j+2] = manifold_sample.atPoint2(gtsam.symbol('l', j))
plt.plot(points[0], points[1], 'k.', markersize=5, alpha=alpha)
return points
def proposal(x, j):
"""Propose via Gibbs sampling"""
# Get linearized Gaussian Markov blanket
local_graph = markov_blankets[j].linearize(x)
# Eliminate just x_j:
ordering = gtsam.Ordering()
ordering.push_back(j)
try: # eliminate, might fail if singular
gbn, _ = local_graph.eliminatePartialSequential(ordering)
# sample x_j and propose a new manifold sample
conditional = gbn.at(0)
vvj = gtsam.VectorValues()
vvj.insert(j, sample_conditional(conditional, 1))
return x.retract(vvj)
except:
return x
# Start with MAP estimate
y = gtsam.Values(truth)
N = 10000
marginals_figure(truth, marginals, keys)
for it in range(N):
# choose a variable to perturb
j = keys[rng.choice(5)]
y = proposal(y, j)
if it > N//2:
plot_sample(y, alpha=0.01)
###Output
_____no_output_____
###Markdown
Corrected Gibbs SamplingAbove we pretended elimination yielded the correct conditional on $x_j$ given its separator. Unfortunately, calculating the conditional probability $P(x_j|X_j)$ exactly is again a hard problem, and sampling from it can be expensive as well. We correct for this by rejecting a fraction of the proposals, Metropolis style: we run a Metropolis sampler whose proposal randomly picks a variable to perturb, and then use the local factor graph to calculate the acceptance ratio.
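Concretely, because the factor-graph error is the negative log-density up to a constant, a proposal $x'$ obtained by perturbing $x_j$ is accepted with probability $\min(1, e^{\log a})$ with $\log a = E(x) - E(x')$, where both errors only involve the factors in the Markov blanket of $x_j$; this is what the next cell implements, with a cutoff on very negative $\log a$ to avoid overflow.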
###Code
rng = np.random.default_rng(42)
def accept(log_a):
"""calculate acceptance, with some care to avoid overflow."""
if log_a >= 0:
return True
if log_a < -10:
return False
return rng.uniform() < math.exp(log_a)
# Start with MAP estimate
z = gtsam.Values(truth)
N = 15000
marginals_figure(truth, marginals, keys)
nr_accepted = 0
for it in range(N):
# choose a variable to perturb
j = keys[rng.choice(5)]
p = proposal(z, j)
# calculate local acceptance ratio
log_a = markov_blankets[j].error(z) - markov_blankets[j].error(p)
if accept(log_a):
nr_accepted += 1
z = p
if it>N//2:
plot_sample(z, alpha=0.01)
print(f"nr_accepted={nr_accepted}")
###Output
nr_accepted=8536
|
notebook/Localisation_experiments/Loc Code INSEE.ipynb | ###Markdown
Imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
#Sklearn
from sklearn.decomposition import NMF, LatentDirichletAllocation, TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import TSNE
#other
import concurrent.futures
import time
import pyLDAvis.sklearn
from pylab import bone, pcolor, colorbar, plot, show, rcParams, savefig
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# Plotly
from plotly import tools
import chart_studio.plotly as py
from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.figure_factory as ff
#spaCy
import spacy
nlp = spacy.load("fr_core_news_lg")
from spacy.lang.fr.stop_words import STOP_WORDS
from spacy.lang.fr import French
import string
punctuations = string.punctuation
stopwords = list(STOP_WORDS)
###Output
_____no_output_____
###Markdown
Load (acceptably clean) data
###Code
data=pd.read_pickle("../../data/cleaned2.pkl")
data_np=data.to_numpy()
data=np.load('../../data/organizations_col.npy', allow_pickle=True)
data_single=list(dict.fromkeys(data))
data_single=[x for x in data_single if str(x) != 'nan']
len(data_single)
i=random.randint(1, 1161)
data_single[i]
import progressbar
bar = progressbar.ProgressBar(maxval=len(data_np), \
widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
###Output
_____no_output_____
###Markdown
Process data to extract keywords only Extract the locs
###Code
def extract_locs(doc):
all_locs=[None]*len(doc)
bar.start()
for i in range(len(doc)):
this_doc=[]
for token in nlp(doc[i]):
if token.ent_type_=='LOC':
this_doc.append(token.text)
all_locs[i]=this_doc
bar.update(i+1)
bar.finish()
return(all_locs)
import unicodedata
def strip_accents(text):
try:
text = unicode(text, 'utf-8')
except NameError: # unicode is a default on python 3
pass
text = unicodedata.normalize('NFD', text)\
.encode('ascii', 'ignore')\
.decode("utf-8")
return str(text)
all_locs=extract_locs(data_single)
all_locs
#np.save('../../data/locs_org_alg0.npy',np.array(all_locs), fix_imports=True, allow_pickle=True)
stop_loc=['Région', 'Métropole', 'Region', 'Metropole','Mer', 'mer', 'Département', 'DEPARTEMENT', 'Agglomération', 'agglomération','Communauté', 'communauté']
all_locs_ns=[None]*len(all_locs)
for i in range(len(all_locs)):
mini_list=[x for x in all_locs[i] if x not in stop_loc]
all_locs_ns[i]=mini_list
all_locs_ns_str = list(all_locs_ns)  # copy so the token lists in all_locs_ns are not overwritten below
for i in range(len(all_locs_ns)):
all_locs_ns_str[i]=' '.join(all_locs_ns[i])
len(all_locs_ns_str)
all_locs_np=np.array(all_locs)
np.save('../../data/locs_org_alg0.npy',np.array(all_locs_ns_str), fix_imports=True, allow_pickle=True)
#np.save('../../data/locs_insee_str.npy', all_locs_ns_str, allow_pickle=True)
###Output
_____no_output_____
###Markdown
Compare with INSEE tables
###Code
all_locs_ns_str=np.load('../../data/locs_insee_str.npy', allow_pickle=True)
all_locs_ns_str[:10]
departements=pd.read_csv('../../data/departement2019.csv')
communes=pd.read_csv('../../data/communes-01012019.csv')
regions=pd.read_csv('../../data/region2019.csv')
communes_names=communes['nccenr'].to_numpy()
departements_names=departements['nccenr'].to_numpy()
regions_names=regions['nccenr'].to_numpy()
not_comm_names=list(departements_names)+list(regions_names)##+['France', 'Europe', 'Monde']
communes_prblm = list(np.load('../../data/communes_prblm.npy', allow_pickle=True))
np.save('../../data/communes_prblm.npy', np.array(communes_prblm, dtype=object), allow_pickle=True)
from fuzzywuzzy import fuzz  # used just below; imported here so this cell also runs on its own
communes_prblm=[]
for i in communes_names:
for j in not_comm_names:
if fuzz.token_set_ratio(i,j)==100:
communes_prblm.append(i)
all_locs_ns_str[25]
regions_names
from fuzzywuzzy import fuzz
def is_in(locs, category):
    """Fuzzy-match `locs` against every name in `category`; return (index of best match, whether an exact match was found)."""
values=[]
for i in category:
values.append(fuzz.token_set_ratio(locs, i))
maxi=max(values)
max_item=[i for i, j in enumerate(values) if j == maxi]
# print(max_item)
if values[max_item[0]]==100:
found=True
if len(max_item)>1:
values2=[]
for w in max_item:
values2.append(fuzz.ratio(locs, category[w]))
# print(values2)
max_item_item=values2.index(max(values2))
max_item=[max_item[max_item_item]]
else:
found=False
return(max_item[0], found)
def text_to_loc(text):
    """Map a free-text location string to the best-matching INSEE commune, département or région row."""
if text=='':
return(None)
if (text.find(' france')!=-1) or (text.find(' France')!=-1):
return('France')
max_item, found_c=is_in(text, communes_names)
location=communes.loc[[max_item]]
if communes_names[max_item] in communes_prblm:
found_c=False
if found_c==False:
max_item, found_d=is_in(text, not_comm_names)
try:
location=departements.loc[[max_item]]
except:
location=regions.loc[[max_item-len(departements_names)]]
return(location)
return(location)
location=text_to_loc('Paris île-de-cité france.')
location
'île de france'.find(' france')
i=random.randint(1,len(all_locs_ns_str))
#i=27944
print('TEXTE INITIAL: '+ data_np[i])
t = time.time()
location=text_to_loc(all_locs_ns_str[i])
elapsed = time.time() - t
print('MOTS EXTRAITS: ' + all_locs_ns_str[i])
print(' ')
print('Localisation INSEE:')
print(location)
print('Computed in '+str(elapsed) )
###Output
TEXTE INITIAL: tronçon de voie - saint-étienne-de-baïgorry. Saint-Étienne-de-Baïgorry . les voies sont représentées par leur axes et découpées chaque intersection. l’écriture des toponymes d’origine basque été validée par euskaltzaindia académie de la langue basque la mise jour est effectuée après délibération du conseil municipal.. adresse gps localisation voirie
MOTS EXTRAITSde la Charente Maritime
Localisation INSEE:
dep reg cheflieu tncc ncc nccenr \
16 17 75 17300 3 CHARENTE MARITIME Charente-Maritime
libelle
16 Charente-Maritime
Computed in 6.5447328090667725
###Markdown
Tests requiring higher precision, hence more constrained matching
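As a small illustration (not from the original run, and the example strings below are made up), the next cell prints both fuzzywuzzy scores side by side for a few location strings, so you can see where `token_set_ratio` and the more demanding character-level `partial_ratio` differ.
###Code
# Hedged comparison sketch; the (text, reference) pairs are illustrative only.
from fuzzywuzzy import fuzz

examples = [("la Charente Maritime", "Charente-Maritime"),
            ("Saint-Étienne-de-Baïgorry", "Saint-Étienne"),
            ("Métropole de Lyon", "Lyon")]
for text, ref in examples:
    print(f"{text!r} vs {ref!r}: "
          f"token_set_ratio={fuzz.token_set_ratio(text, ref)}, "
          f"partial_ratio={fuzz.partial_ratio(text, ref)}")
###Output
_____no_output_____
###Markdown
The cells below redefine `is_in` and `text_to_loc` with `partial_ratio`.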
###Code
from fuzzywuzzy import fuzz
def is_in(locs, category):
values=[]
for i in category:
values.append(fuzz.partial_ratio(locs, i))
maxi=max(values)
max_item=[i for i, j in enumerate(values) if j == maxi]
# print(max_item)
if values[max_item[0]]==100:
found=True
if len(max_item)>1:
values2=[]
for w in max_item:
values2.append(fuzz.ratio(locs, category[w]))
# print(values2)
max_item_item=values2.index(max(values2))
max_item=[max_item[max_item_item]]
else:
found=False
return(max_item[0], found)
def text_to_loc(text):
if text=='':
return(None)
if (text.find(' france')!=-1) or (text.find(' France')!=-1):
return('France')
max_item, found_c=is_in(text, communes_names)
location=communes.loc[[max_item]]
if communes_names[max_item] in communes_prblm:
found_c=False
if found_c==False:
max_item, found_d=is_in(text, not_comm_names)
try:
location=departements.loc[[max_item]]
except:
location=regions.loc[[max_item-len(departements_names)]]
return(location)
return(location)
i=random.randint(1,len(all_locs_ns_str))
#i=27944
print('TEXTE INITIAL: '+ data_single[i])
t = time.time()
location=text_to_loc(data_single[i])
elapsed = time.time() - t
print('MOTS EXTRAITS: ' + data_single[i])
print(' ')
print('Localisation INSEE:')
print(location)
print('Computed in '+str(elapsed) )
from tqdm import tqdm
locations_orgs=[]
for i in tqdm(data_single):
locations_orgs.append(text_to_loc(i))
np.save('../../data/locs_org_alg1.npy',np.array(locations_orgs), fix_imports=True, allow_pickle=True)
locations_orgs
#39298 huat de france et français
all_locs_ns_str[20751]
communes_names[31343]#8183
other=list(regions_names)+list(departements_names)+['France']
communes
dpt_prblm=[]
for i in departements_names:
for j in regions_names:
if fuzz.token_set_ratio(i,j)==100:
dpt_prblm.append(i)
for i in communes_names:
if fuzz.token_set_ratio(i,'drôme')==100:
print(i)
regions_names
departements_names
len(communes_prblm)
data_np[3422]
###Output
_____no_output_____
###Markdown
Pandas concatenation experiments
###Code
regions.loc[[10]]
departements.loc[[3]]
communes.loc[[7563]]
result=pd.concat([regions.loc[[10]],departements.loc[[3]], communes.loc[[7563]]], ignore_index=True, sort=False)
communes.loc[[7563]].append(departements.loc[3], ignore_index=True, sort=False)
df=pd.DataFrame({'ncc':['France']})
df
df.append(communes.loc[[7563]])
###Output
_____no_output_____
###Markdown
Tests with subwords
###Code
resultats=[]
for loc in all_locs_ns_str:
for ref in list(communes_names)+list(departements_names)+list(regions_names):
if ref in loc.split():
resultats.append((loc,ref))
resultats[:100]
###Output
_____no_output_____ |
dev/02_data_transforms.ipynb | ###Markdown
Transforms Helpers
###Code
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> Tuple[float,float]: return x
test_eq(anno_ret(f), Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p1_anno(f):
"Get the annotation of first param of `f`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
return ann[0] if ann else object
def _f(a, b): pass
test_eq(_p1_anno(_f), object)
def _f(a, b)->str: pass
test_eq(_p1_anno(_f), object)
def _f(a, b:str)->float: pass
test_eq(_p1_anno(_f), str)
def _f(a:int, b:int)->float: pass
test_eq(_p1_anno(_f), int)
test_eq(_p1_anno(attrgetter('foo')), object)
###Output
_____no_output_____
###Markdown
Types `TensorImage`, `TensorImageBW` and `TensorMask` are subclasses of `torch.Tensor` that know how to show themselves.
###Code
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, **kwargs):
fig,ax = plt.subplots(nrows,ncols,**kwargs)
if nrows*ncols==1: ax = array([ax])
return fig,ax
#export
class TensorImageBase(TensorBase):
_show_args = {'cmap':'viridis'}
def show(self, ctx=None, **kwargs):
return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
n_samples = min(self.shape[0], max_n)
rows = rows or int(np.ceil(math.sqrt(n_samples)))
cols = cols or int(np.ceil(math.sqrt(n_samples)))
figsize = (cols*3, rows*3) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
return axs.flatten()
#export
class TensorImage(TensorImageBase): pass
#export
class TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}
#export
class TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}
im = Image.open(TEST_IMAGE)
im_t = TensorImage(array(im))
test_eq(type(im_t), TensorImage)
im_t2 = TensorMask(tensor(1))
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
test_fig_exists(ax)
#hide
axes = im_t.get_ctxs(1)
test_eq(axes.shape,[1])
plt.close()
axes = im_t.get_ctxs(4)
test_eq(axes.shape,[4])
plt.close()
###Output
_____no_output_____
###Markdown
TypeDispatch - The following class is the basis that allows us to do type dispatch with type annotations. It contains a dictionary mapping types to functions and ensures that the proper function is called when passed an object (depending on its type).
###Code
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs,self.cache = {},{}
for f in funcs: self.add(f)
self.inst = None
def _reset(self):
self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}
self.cache = {**self.funcs}
def add(self, f):
"Add type `t` and function `f`"
self.funcs[_p1_anno(f) or object] = f
self._reset()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})
def __call__(self, x, *args, **kwargs):
f = self[type(x)]
if not f: return x
if self.inst is not None: f = types.MethodType(f, self.inst)
return f(x, *args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
if k in self.cache: return self.cache[k]
types = [f for f in self.funcs if issubclass(k,f)]
res = self.funcs[types[0]] if types else None
self.cache[k] = res
return res
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_bti(x:TensorMask): return x
def f_fti(x:TensorImage): return x
def f_bll(x:bool): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)
test_eq(t[int], f_nin)
test_eq(t[str], None)
test_eq(t[TensorImage], f_fti)
test_eq(t[float], f_num)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[int], f_nin)
test_eq(t(1), 2)
test_eq(t.returns(1), int)
t
def m_nin(self, x:numbers.Integral): return x+1
def m_bll(self, x:bool): self.foo='a'
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), 2)
test_eq(a.f(1.), 1.)
a.f(False)
test_eq(a.foo, 'a')
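# Added check (not in the original notebook): when several registered types match,
# the most specific one wins. bool is also a numbers.Integral, but the
# bool-specific m_bll is the one returned.
test_eq(t[bool], m_bll)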
###Output
_____no_output_____
###Markdown
Transform -
###Code
#export
_tfm_methods = 'encodes','decodes','setups'
class _TfmDict(dict):
def __setitem__(self,k,v):
if k not in _tfm_methods or not isinstance(v,Callable): return super().__setitem__(k,v)
if k not in self: super().__setitem__(k,TypeDispatch())
res = self[k]
res.add(v)
#export
class _TfmMeta(type):
def __new__(cls, name, bases, dict):
res = super().__new__(cls, name, bases, dict)
res.__signature__ = inspect.signature(res.__init__)
return res
def __call__(cls, *args, **kwargs):
f = args[0] if args else None
n = getattr(f,'__name__',None)
for nm in _tfm_methods:
if not hasattr(cls,nm): setattr(cls, nm, TypeDispatch())
if isinstance(f,Callable) and n in _tfm_methods:
getattr(cls,n).add(f)
return f
return super().__call__(*args, **kwargs)
@classmethod
def __prepare__(cls, name, bases): return _TfmDict()
#export
class Transform(metaclass=_TfmMeta):
"Delegates (`__call__`,`decode`,`setup`) to (`encodes`,`decodes`,`setups`) if `filt` matches"
filt,init_enc,as_item_force,as_item,order = None,False,None,True,0
def __init__(self, enc=None, dec=None, filt=None, as_item=False):
self.filt,self.as_item = ifnone(filt, self.filt),as_item
self.init_enc = enc or dec
if not self.init_enc: return
# Passing enc/dec, so need to remove (base) class level enc/dec
del(self.__class__.encodes,self.__class__.decodes,self.__class__.setups)
self.encodes,self.decodes,self.setups = TypeDispatch(),TypeDispatch(),TypeDispatch()
if enc:
self.encodes.add(enc)
self.order = getattr(self.encodes,'order',self.order)
if dec: self.decodes.add(dec)
@property
def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
def setup(self, items=None): return self.setups(items)
def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'
def _call(self, fn, x, filt=None, **kwargs):
if filt!=self.filt and self.filt is not None: return x
f = getattr(self, fn)
if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
return retain_type(res, x)
def _do_call(self, f, x, **kwargs):
return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
add_docs(Transform, decode="Delegate to `decodes` to undo transform", setup="Delegate to `setups` to set up transform")
show_doc(Transform)
###Output
_____no_output_____
###Markdown
A `Transform` is the main building block of the fastai data pipelines. In the most general terms a transform can be any function you want to apply to your data, however the `Transform` class provides several mechanisms that make the process of building them easy and flexible. The main `Transform` features:- **Type dispatch** - Type annotations are used to determine if a transform should be applied to the given argument. It also gives an option to provide several implementations and it chooses the one to run based on the type. This is useful for example when running both independent and dependent variables through the pipeline where some transforms only make sense for one and not the other. Another use case is designing a transform that handles different data formats. Note that if a transform takes multiple arguments only the type of the first one is used for dispatch. - **Handling of tuples** - When a tuple (or another collection satisfying `is_listy`) of data is passed to a transform it will get applied to each element separately. Most commonly it will be a *(x,y)* tuple, but it can be anything, for example a list of images. You can opt out of this behavior by setting the flag `as_item=True`. For transforms that must always operate on the tuple level you can set `as_item_force=True`, which takes precedence over `as_item`; an example of that is `PointScaler`.- **Reversibility** - A transform can be made reversible by implementing the `decodes` method. This is mainly used to turn something like a category which is encoded as a number back into a label understandable by humans for showing purposes.- **Type propagation** - Whenever possible a transform tries to return data of the same type it received. This is mainly used to maintain the semantics of things like `TensorImage`, which is a thin wrapper around PyTorch's `Tensor`. You can opt out of this behavior by adding a `->None` return type annotation.- **Preprocessing** - The `setup` method can be used to perform any one-time calculations to be later used by the transform, for example generating a vocabulary to encode categorical data.- **Filtering based on the dataset type** - By setting the `filt` flag you can make the transform be used only in a specific `DataSource` subset, like in training but not validation.- **Ordering** - You can set the `order` attribute which the `Pipeline` uses when it needs to merge two lists of transforms.- **Appending new behavior with decorators** - You can easily extend an existing `Transform` by creating `encodes` or `decodes` methods for new data types. You can put those new methods outside the original transform definition and decorate them with the class you wish them patched into. This can be used by the fastai library users to add their own behavior, or by multiple modules contributing to the same transform. Defining a `Transform`There are a few ways to create a transform, with different ratios of simplicity to flexibility.- **Extending the `Transform` class** - Use inheritance to implement the methods you want- **Passing methods to the constructor** - Instantiate the `Transform` class and pass your functions as `enc` and `dec` arguments.- **@Transform decorator** - Turn any function into a `Transform` by just adding a decorator, very straightforward if all you need is a single `encodes` implementation.- **Passing a function to fastai APIs** - Same as above, but when passing a function to other transform-aware classes like `Pipeline` or `TfmdDS` you don't even need a decorator.
Your function will get converted to a `Transform` automatically.
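As a quick illustrative sketch (the class `DoubleInts` and the variable names below are invented for this example and are not part of the library), the next cell combines a few of the behaviors listed above, namely type dispatch, reversibility via `decodes`, and element-wise handling of listy inputs.
###Code
# Illustration only (names invented for this sketch): one transform exercising
# type dispatch, decoding, and element-wise application over a listy input.
class DoubleInts(Transform):
    def encodes(self, x:int): return Int(x*2)   # ints are doubled and tagged as Int
    def encodes(self, x:str): return x + x      # strings get their own rule
    def decodes(self, x:Int): return x//2       # only the Int encoding is undone

dbl = DoubleInts(as_item=False)                 # apply to each element of a listy input
t = dbl((3, 'ab', 1.5))                         # the float has no matching rule, so it passes through
test_eq(t, (6, 'abab', 1.5))
test_eq(dbl.decode(t), (3, 'abab', 1.5))        # only the doubled int is decoded back
###Output
_____no_output_____
###Markdown
The cells that follow exercise each of these behaviors one at a time.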
###Code
class A(Transform): pass
@A
def encodes(self, x): return x+1
f1 = A()
test_eq(f1(1), 2)
class B(A): pass
f2 = B()
test_eq(f2(1), 2)
class A(Transform): pass
f3 = A()
test_eq_type(f3(2), 2)
test_eq_type(f3.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
`Transform` can be used as a decorator, to turn a function into a `Transform`.
###Code
f = Transform(lambda o:o//2)
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
@Transform
def f(x): return x//2
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
You can derive from `Transform` and use `encodes` for your encoding function.
###Code
class A(Transform):
def encodes(self, x:TensorImage): return -x
def decodes(self, x:TensorImage): return x+1
def setups (self, x:TensorImage): x.foo = 'a'
f = A()
t = f(im_t)
test_eq(t, -im_t)
test_eq(f(1), 1)
test_eq(type(t), TensorImage)
test_eq(f.decode(t), -im_t+1)
test_eq(f.decode(1), 1)
f.setup(im_t)
test_eq(im_t.foo, 'a')
t2 = tensor(1)
f.setup(t2)
assert not hasattr(f2,'foo')
f
###Output
_____no_output_____
###Markdown
Without return annotation we get an `Int` back since that's what was passed.
###Code
class A(Transform): pass
@A
def encodes(self, x:Int): return x//2
@A
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), Int(1))
test_eq_type(f(2), 2)
test_eq_type(f(2.), 3.)
###Output
_____no_output_____
###Markdown
Without return annotation we don't cast if we're not a subclass of the input type.
###Code
class A(Transform):
def encodes(self, x:Int): return x/2
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), 1.)
test_eq_type(f(2), 2)
test_eq_type(f(Float(2.)), Float(3.))
###Output
_____no_output_____
###Markdown
With return annotation `None` we get back whatever Python creates usually.
###Code
def func(x)->None: return x/2
f = Transform(func)
test_eq_type(f(2), 1.)
test_eq_type(f(2.), 1.)
###Output
_____no_output_____
###Markdown
Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.
###Code
def func(x): return Int(x+1)
def dec (x): return x-1
f = Transform(func,dec)
t = f(1)
test_eq_type(t, Int(2))
test_eq_type(f.decode(t), Int(1))
###Output
_____no_output_____
###Markdown
If the transform has `filt` then it's only applied if `filt` param matches.
###Code
f.filt = 1
test_eq(f(1, filt=1),2)
test_eq_type(f(1, filt=0), 1)
###Output
_____no_output_____
###Markdown
If `as_item=True` the transform treats a tuple as a whole and is applied to it directly.
###Code
class A(Transform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = A(as_item=True)
t = f((1,2))
test_eq(t, (3,2))
test_eq(f.decode(t), (1,2))
f.filt = 1
test_eq(f((1,2), filt=1), (3,2))
test_eq(f((1,2), filt=0), (1,2))
class AL(Transform): pass
@AL
def encodes(self, x): return L(x_+1 for x_ in x)
@AL
def decodes(self, x): return L(x_-1 for x_ in x)
f = AL(as_item=True)
t = f([1,2])
test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
###Output
_____no_output_____
###Markdown
If `as_item=False` the transform is applied to each element of a listy input.
###Code
def neg_int(x:numbers.Integral): return -x
f = Transform(neg_int, as_item=False)
test_eq(f([1]), (-1,))
test_eq(f([1.]), (1.,))
test_eq(f([1.,2,3.]), (1.,-2,3.))
test_eq(f.decode([1,2]), (1,2))
#export
class InplaceTransform(Transform):
"A `Transform` that modifies in-place and just returns whatever it's passed"
def _call(self, fn, x, filt=None, **kwargs):
super()._call(fn,x,filt,**kwargs)
return x
###Output
_____no_output_____
###Markdown
TupleTransform
###Code
#export
class TupleTransform(Transform):
"`Transform` that always treats `as_item` as `False`"
as_item_force=False
#export
class ItemTransform (Transform):
"`Transform` that always treats `as_item` as `True`"
as_item_force=True
def float_to_int(x:(float,int)): return Int(x)
f = TupleTransform(float_to_int)
test_eq_type(f([1.]), (Int(1),))
test_eq_type(f([1]), (Int(1),))
test_eq_type(f(['1']), ('1',))
test_eq_type(f([1,'1']), (Int(1),'1'))
test_eq(f.decode([1]), [1])
test_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))
class B(TupleTransform): pass
class C(TupleTransform): pass
f = B()
test_eq(f([1]), [1])
@B
def encodes(self, x:int): return x+1
@B
def encodes(self, x:str): return x+'1'
@B
def encodes(self, x)->None: return str(x)+'!'
b,c = B(),C()
test_eq(b([1]), [2])
test_eq(b(['1']), ('11',))
test_eq(b([1.0]), ('1.0!',))
test_eq(c([1]), [1])
test_eq(b([1,2]), (2,3))
test_eq(b.decode([2]), [2])
assert pickle.loads(pickle.dumps(b))
@B
def decodes(self, x:int): return x-1
test_eq(b.decode([2]), [1])
test_eq(b.decode(('2',)), ('2',))
###Output
_____no_output_____
###Markdown
Non-type-constrained functions are applied to all elements of a tuple.
###Code
class A(TupleTransform): pass
@A
def encodes(self, x): return x+1
@A
def decodes(self, x): return x-1
f = A()
t = f((1,2.0))
test_eq_type(t, (2,3.0))
test_eq_type(f.decode(t), (1,2.0))
###Output
_____no_output_____
###Markdown
Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.
###Code
class B(TupleTransform):
def encodes(self, x:int): return Int(x+1)
def encodes(self, x:str): return x+'1'
def decodes(self, x:Int): return x//2
f = B()
start = (1.,2,'3')
t = f(start)
test_eq_type(t, (1.,Int(3),'31'))
test_eq(f.decode(t), (1.,Int(1),'31'))
###Output
_____no_output_____
###Markdown
The same behavior also works with `typing` module type classes.
###Code
class A(Transform): pass
@A
def encodes(self, x:numbers.Integral): return x+1
@A
def encodes(self, x:float): return x*3
@A
def decodes(self, x:int): return x-1
f = A()
start = 1.0
t = f(start)
test_eq(t, 3.)
test_eq(f.decode(t), 3)
f = A(as_item=False)
start = (1.,2,3.)
t = f(start)
test_eq(t, (3.,3,9.))
test_eq(f.decode(t), (3.,2,9.))
###Output
_____no_output_____
###Markdown
Transform accepts lists
###Code
def a(x): return L(x_+1 for x_ in x)
def b(x): return L(x_-1 for x_ in x)
f = TupleTransform(a,b)
t = f((L(1,2),))
test_eq(t, (L(2,3),))
test_eq(f.decode(t), (L(1,2),))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
Converted tmp.ipynb.
###Markdown
Transforms Helpers
###Code
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> Tuple[float,float]: return x
test_eq(anno_ret(f), Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p1_anno(f):
"Get the annotation of first param of `f`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
return ann[0] if ann else object
def _f(a, b): pass
test_eq(_p1_anno(_f), object)
def _f(a, b)->str: pass
test_eq(_p1_anno(_f), object)
def _f(a, b:str)->float: pass
test_eq(_p1_anno(_f), str)
def _f(a:int, b:int)->float: pass
test_eq(_p1_anno(_f), int)
test_eq(_p1_anno(attrgetter('foo')), object)
###Output
_____no_output_____
###Markdown
Types
###Code
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, **kwargs):
fig,ax = plt.subplots(nrows,ncols,**kwargs)
if nrows*ncols==1: ax = array([ax])
return fig,ax
#export
class TensorImageBase(TensorBase):
_show_args = {'cmap':'viridis'}
def show(self, ctx=None, **kwargs):
return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
n_samples = min(self.shape[0], max_n)
rows = rows or int(np.ceil(math.sqrt(n_samples)))
cols = cols or int(np.ceil(math.sqrt(n_samples)))
figsize = (cols*3, rows*3) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
return axs.flatten()
#export
class TensorImage(TensorImageBase): pass
#export
class TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}
#export
class TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}
im = Image.open(TEST_IMAGE)
im_t = TensorImage(array(im))
test_eq(type(im_t), TensorImage)
im_t2 = TensorMask(tensor(1))
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
test_fig_exists(ax)
#hide
axes = im_t.get_ctxs(1)
test_eq(axes.shape,[1])
plt.close()
axes = im_t.get_ctxs(4)
test_eq(axes.shape,[4])
plt.close()
###Output
_____no_output_____
###Markdown
TypeDispatch -
###Code
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs,self.cache = {},{}
for f in funcs: self.add(f)
self.inst = None
def _reset(self):
self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}
self.cache = {**self.funcs}
def add(self, f):
"Add type `t` and function `f`"
self.funcs[_p1_anno(f) or object] = f
self._reset()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})
def __call__(self, x, *args, **kwargs):
f = self[type(x)]
if not f: return x
if self.inst: f = types.MethodType(f, self.inst)
return f(x, *args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
if k in self.cache: return self.cache[k]
types = [f for f in self.funcs if issubclass(k,f)]
res = self.funcs[types[0]] if types else None
self.cache[k] = res
return res
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_bti(x:TensorMask): return x
def f_fti(x:TensorImage): return x
def f_bll(x:bool): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)
test_eq(t[int], f_nin)
test_eq(t[str], None)
test_eq(t[TensorImage], f_fti)
test_eq(t[float], f_num)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[int], f_nin)
test_eq(t(1), 2)
test_eq(t.returns(1), int)
t
def m_nin(self, x:numbers.Integral): return x+1
def m_bll(self, x:bool): return x
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), 2)
test_eq(a.f(1.), 1.)
###Output
_____no_output_____
###Markdown
Transform -
###Code
#export
class _TfmDict(dict):
def __setitem__(self,k,v):
if k=='_': k='encodes'
if k not in ('encodes','decodes') or not isinstance(v,Callable): return super().__setitem__(k,v)
if k not in self: super().__setitem__(k,TypeDispatch())
res = self[k]
res.add(v)
#export
class _TfmMeta(type):
def __new__(cls, name, bases, dict):
res = super().__new__(cls, name, bases, dict)
res.__signature__ = inspect.signature(res.__init__)
return res
def __call__(cls, *args, **kwargs):
f = args[0] if args else None
n = getattr(f,'__name__',None)
if not hasattr(cls,'encodes'): cls.encodes=TypeDispatch()
if not hasattr(cls,'decodes'): cls.decodes=TypeDispatch()
if isinstance(f,Callable) and n in ('decodes','encodes','_'):
getattr(cls,'encodes' if n=='_' else n).add(f)
return f
return super().__call__(*args, **kwargs)
@classmethod
def __prepare__(cls, name, bases): return _TfmDict()
#export
class Transform(metaclass=_TfmMeta):
"Delegates (`__call__`,`decode`) to (`encodes`,`decodes`) if `filt` matches"
filt,init_enc,as_item_force,as_item,order = None,False,None,True,0
def __init__(self, enc=None, dec=None, filt=None, as_item=False):
self.filt,self.as_item = ifnone(filt, self.filt),as_item
self.init_enc = enc or dec
if not self.init_enc: return
# Passing enc/dec, so need to remove (base) class level enc/dec
del(self.__class__.encodes,self.__class__.decodes)
self.encodes,self.decodes = (TypeDispatch(),TypeDispatch())
if enc:
self.encodes.add(enc)
self.order = getattr(self.encodes,'order',self.order)
if dec: self.decodes.add(dec)
@property
def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
def setup(self, items=None): return getattr(self,'setups',noop)(items)
def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'
def _call(self, fn, x, filt=None, **kwargs):
if filt!=self.filt and self.filt is not None: return x
f = getattr(self, fn)
if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
return retain_type(res, x)
def _do_call(self, f, x, **kwargs):
return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
add_docs(Transform, decode="Delegate to `decodes` to undo transform", setup="Delegate to `setups` to set up transform")
show_doc(Transform)
###Output
_____no_output_____
###Markdown
Base class that delegates `__call__` and `decode` to `encodes` and `decodes`, doing nothing if the param annotation doesn't match the type. If called with a listy `x` then it calls the function with each item (unless `whole_tuple`, in which case `x` is passed directly as a whole). The function (if matching the 1st param type) will cast the result to the same type as the input, unless there's a return annotation (in which case it's cast to that), or the return annotation is `None` (in which case no casting is done).Details: `Transform` is a base class where you override encodes and/or decodes. e.g. `__call__` uses `call` which looks up what to call using `func`. If `whole_tuple` is set, that just returns `encodes` (or `decodes` if not `is_enc`). Otherwise we find the first annotated param with `_p1_anno` and check if `x` is an instance of that (if not `is_listy(x)`). If it is, we return the function (encodes/decodes), otherwise None. `call` then passes on to `_do_call`, which does nothing if the function is `None`. If `x` is listy, then we return a *list* of {functions or `None`}, and a list of results from `_do_call` for each function is returned.
###Code
class A(Transform): pass
@A
def encodes(self, x): return x+1
f1 = A()
test_eq(f1(1), 2)
class B(A): pass
f2 = B()
test_eq(f2(1), 2)
class A(Transform): pass
f3 = A()
test_eq_type(f3(2), 2)
test_eq_type(f3.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
`Transform` can be used as a decorator, to turn a function into a `Transform`.
###Code
@Transform
def f(x): return x//2
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
You can derive from `Transform` and use either `_` or `encodes` for your encoding function.
###Code
class A(Transform):
def _(self, x:TensorImage): return -x
f = A()
t = f(im_t)
test_eq(t, -im_t)
test_eq(f(1), 1)
test_eq(type(t), TensorImage)
f
###Output
_____no_output_____
###Markdown
Without return annotation we get an `Int` back since that's what was passed.
###Code
class A(Transform): pass
@A
def _(self, x:Int): return x//2 # `_` is an abbreviation for `encodes`
@A
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), Int(1))
test_eq_type(f(2), 2)
test_eq_type(f(2.), 3.)
###Output
_____no_output_____
###Markdown
Without return annotation we don't cast if we're not a subclass of the input type.
###Code
class A(Transform):
def encodes(self, x:Int): return x/2
def _(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), 1.)
test_eq_type(f(2), 2)
test_eq_type(f(Float(2.)), Float(3.))
###Output
_____no_output_____
###Markdown
With return annotation `None` we get back whatever Python creates usually.
###Code
def func(x)->None: return x/2
f = Transform(func)
test_eq_type(f(2), 1.)
test_eq_type(f(2.), 1.)
###Output
_____no_output_____
###Markdown
Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.
###Code
def func(x): return Int(x+1)
def dec (x): return x-1
f = Transform(func,dec)
t = f(1)
test_eq_type(t, Int(2))
test_eq_type(f.decode(t), Int(1))
###Output
_____no_output_____
###Markdown
If the transform has `filt` then it's only applied if `filt` param matches.
###Code
f.filt = 1
test_eq(f(1, filt=1),2)
test_eq_type(f(1, filt=0), 1)
class A(Transform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = A(as_item=True)
t = f((1,2))
test_eq(t, (3,2))
test_eq(f.decode(t), (1,2))
f.filt = 1
test_eq(f((1,2), filt=1), (3,2))
test_eq(f((1,2), filt=0), (1,2))
class AL(Transform): pass
@AL
def encodes(self, x): return L(x_+1 for x_ in x)
@AL
def decodes(self, x): return L(x_-1 for x_ in x)
f = AL(as_item=True)
t = f([1,2])
test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
def neg_int(x:numbers.Integral): return -x
f = Transform(neg_int, as_item=False)
test_eq(f([1]), (-1,))
test_eq(f([1.]), (1.,))
test_eq(f([1.,2,3.]), (1.,-2,3.))
test_eq(f.decode([1,2]), (1,2))
#export
class InplaceTransform(Transform):
"A `Transform` that modifies in-place and just returns whatever it's passed"
def _call(self, fn, x, filt=None, **kwargs):
super()._call(fn,x,filt,**kwargs)
return x
###Output
_____no_output_____
###Markdown
TupleTransform
###Code
#export
class TupleTransform(Transform):
"`Transform` that always treats `as_item` as `False`"
as_item_force=False
#export
class ItemTransform (Transform):
"`Transform` that always treats `as_item` as `True`"
as_item_force=True
def float_to_int(x:(float,int)): return Int(x)
f = TupleTransform(float_to_int)
test_eq_type(f([1.]), (Int(1),))
test_eq_type(f([1]), (Int(1),))
test_eq_type(f(['1']), ('1',))
test_eq_type(f([1,'1']), (Int(1),'1'))
test_eq(f.decode([1]), [1])
test_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))
class B(TupleTransform): pass
class C(TupleTransform): pass
f = B()
test_eq(f([1]), [1])
@B
def _(self, x:int): return x+1
@B
def _(self, x:str): return x+'1'
@B
def _(self, x)->None: return str(x)+'!'
b,c = B(),C()
test_eq(b([1]), [2])
test_eq(b(['1']), ('11',))
test_eq(b([1.0]), ('1.0!',))
test_eq(c([1]), [1])
test_eq(b([1,2]), (2,3))
test_eq(b.decode([2]), [2])
assert pickle.loads(pickle.dumps(b))
@B
def decodes(self, x:int): return x-1
test_eq(b.decode([2]), [1])
test_eq(b.decode(('2',)), ('2',))
###Output
_____no_output_____
###Markdown
Non-type-constrained functions are applied to all elements of a tuple.
###Code
class A(TupleTransform): pass
@A
def _(self, x): return x+1
@A
def decodes(self, x): return x-1
f = A()
t = f((1,2.0))
test_eq_type(t, (2,3.0))
test_eq_type(f.decode(t), (1,2.0))
###Output
_____no_output_____
###Markdown
Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.
###Code
class B(TupleTransform):
def encodes(self, x:int): return Int(x+1)
def encodes(self, x:str): return x+'1'
def decodes(self, x:Int): return x//2
f = B()
start = (1.,2,'3')
t = f(start)
test_eq_type(t, (1.,Int(3),'31'))
test_eq(f.decode(t), (1.,Int(1),'31'))
###Output
_____no_output_____
###Markdown
The same behavior also works with `typing` module type classes.
###Code
class A(Transform): pass
@A
def _(self, x:numbers.Integral): return x+1
@A
def _(self, x:float): return x*3
@A
def decodes(self, x:int): return x-1
f = A()
start = 1.0
t = f(start)
test_eq(t, 3.)
test_eq(f.decode(t), 3)
f = A(as_item=False)
start = (1.,2,3.)
t = f(start)
test_eq(t, (3.,3,9.))
test_eq(f.decode(t), (3.,2,9.))
###Output
_____no_output_____
###Markdown
Transform accepts lists
###Code
def a(x): return L(x_+1 for x_ in x)
def b(x): return L(x_-1 for x_ in x)
f = TupleTransform(a,b)
t = f((L(1,2),))
test_eq(t, (L(2,3),))
test_eq(f.decode(t), (L(1,2),))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
Converted tmp.ipynb.
###Markdown
Transforms Helpers
###Code
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> Tuple[float,float]: return x
test_eq(anno_ret(f), Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p1_anno(f):
"Get the annotation of first param of `f`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
return ann[0] if ann else object
def _f(a, b): pass
test_eq(_p1_anno(_f), object)
def _f(a, b)->str: pass
test_eq(_p1_anno(_f), object)
def _f(a, b:str)->float: pass
test_eq(_p1_anno(_f), str)
def _f(a:int, b:int)->float: pass
test_eq(_p1_anno(_f), int)
test_eq(_p1_anno(attrgetter('foo')), object)
###Output
_____no_output_____
###Markdown
Types
###Code
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, **kwargs):
fig,ax = plt.subplots(nrows,ncols,**kwargs)
if nrows*ncols==1: ax = array([ax])
return fig,ax
#export
class TensorImageBase(TensorBase):
_show_args = {'cmap':'viridis'}
def show(self, ctx=None, **kwargs):
return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
n_samples = min(self.shape[0], max_n)
rows = rows or int(np.ceil(math.sqrt(n_samples)))
cols = cols or int(np.ceil(math.sqrt(n_samples)))
figsize = (cols*3, rows*3) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
return axs.flatten()
#export
class TensorImage(TensorImageBase): pass
#export
class TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}
#export
class TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}
im = Image.open(TEST_IMAGE)
im_t = TensorImage(array(im))
test_eq(type(im_t), TensorImage)
im_t2 = TensorMask(tensor(1))
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
test_fig_exists(ax)
#hide
axes = im_t.get_ctxs(1)
test_eq(axes.shape,[1])
plt.close()
axes = im_t.get_ctxs(4)
test_eq(axes.shape,[4])
plt.close()
###Output
_____no_output_____
###Markdown
TypeDispatch -
###Code
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs,self.cache = {},{}
for f in funcs: self.add(f)
self.inst = None
def _reset(self):
self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}
self.cache = {**self.funcs}
def add(self, f):
"Add type `t` and function `f`"
self.funcs[_p1_anno(f) or object] = f
self._reset()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})
def __call__(self, x, *args, **kwargs):
f = self[type(x)]
if not f: return x
if self.inst: f = types.MethodType(f, self.inst)
return f(x, *args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
if k in self.cache: return self.cache[k]
types = [f for f in self.funcs if issubclass(k,f)]
res = self.funcs[types[0]] if types else None
self.cache[k] = res
return res
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_bti(x:TensorMask): return x
def f_fti(x:TensorImage): return x
def f_bll(x:bool): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)
test_eq(t[int], f_nin)
test_eq(t[str], None)
test_eq(t[TensorImage], f_fti)
test_eq(t[float], f_num)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[int], f_nin)
test_eq(t(1), 2)
test_eq(t.returns(1), int)
t
def m_nin(self, x:numbers.Integral): return x+1
def m_bll(self, x:bool): return x
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), 2)
test_eq(a.f(1.), 1.)
###Output
_____no_output_____
###Markdown
Transform -
###Code
#export
class _TfmDict(dict):
def __setitem__(self,k,v):
if k=='_': k='encodes'
if k not in ('encodes','decodes') or not isinstance(v,Callable): return super().__setitem__(k,v)
if k not in self: super().__setitem__(k,TypeDispatch())
res = self[k]
res.add(v)
#export
class _TfmMeta(type):
def __new__(cls, name, bases, dict):
res = super().__new__(cls, name, bases, dict)
res.__signature__ = inspect.signature(res.__init__)
return res
def __call__(cls, *args, **kwargs):
f = args[0] if args else None
n = getattr(f,'__name__',None)
if not hasattr(cls,'encodes'): cls.encodes=TypeDispatch()
if not hasattr(cls,'decodes'): cls.decodes=TypeDispatch()
if isinstance(f,Callable) and n in ('decodes','encodes','_'):
getattr(cls,'encodes' if n=='_' else n).add(f)
return f
return super().__call__(*args, **kwargs)
@classmethod
def __prepare__(cls, name, bases): return _TfmDict()
#export
class Transform(metaclass=_TfmMeta):
"Delegates (`__call__`,`decode`) to (`encodes`,`decodes`) if `filt` matches"
filt,init_enc,as_item_force,as_item,order = None,False,None,True,0
def __init__(self, enc=None, dec=None, filt=None, as_item=False):
self.filt,self.as_item = ifnone(filt, self.filt),as_item
self.init_enc = enc or dec
if not self.init_enc: return
# Passing enc/dec, so need to remove (base) class level enc/dec
del(self.__class__.encodes,self.__class__.decodes)
self.encodes,self.decodes = (TypeDispatch(),TypeDispatch())
if enc:
self.encodes.add(enc)
self.order = getattr(self.encodes,'order',self.order)
if dec: self.decodes.add(dec)
@property
def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'
def _call(self, fn, x, filt=None, **kwargs):
if filt!=self.filt and self.filt is not None: return x
f = getattr(self, fn)
if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
return retain_type(res, x)
def _do_call(self, f, x, **kwargs):
return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
add_docs(Transform, decode="Delegate to `decodes` to undo transform")
show_doc(Transform)
###Output
_____no_output_____
###Markdown
Base class that delegates `__call__` and `decode` to `encodes` and `decodes`, doing nothing if the param annotation doesn't match the type. If called with a listy `x` then it calls the function with each item (unless `whole_tuple`, in which case `x` is passed directly as a whole). The function (if matching the 1st param type) will cast the result to the same type as the input, unless there's a return annotation (in which case it's cast to that), or the return annotation is `None` (in which case no casting is done).Details: `Transform` is a base class where you override encodes and/or decodes. e.g. `__call__` uses `call` which looks up what to call using `func`. If `whole_tuple` is set, that just returns `encodes` (or `decodes` if not `is_enc`). Otherwise we find the first annotated param with `_p1_anno` and check if `x` is an instance of that (if not `is_listy(x)`). If it is, we return the function (encodes/decodes), otherwise None. `call` then passes on to `_do_call`, which does nothing if the function is `None`. If `x` is listy, then we return a *list* of {functions or `None`}, and a list of results from `_do_call` for each function is returned.
###Code
class A(Transform): pass
@A
def encodes(self, x): return x+1
f1 = A()
test_eq(f1(1), 2)
class B(A): pass
f2 = B()
test_eq(f2(1), 2)
class A(Transform): pass
f3 = A()
test_eq_type(f3(2), 2)
test_eq_type(f3.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
`Transform` can be used as a decorator, to turn a function into a `Transform`.
###Code
@Transform
def f(x): return x//2
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
You can derive from `Transform` and use either `_` or `encodes` for your encoding function.
###Code
class A(Transform):
def _(self, x:TensorImage): return -x
f = A()
t = f(im_t)
test_eq(t, -im_t)
test_eq(f(1), 1)
test_eq(type(t), TensorImage)
f
###Output
_____no_output_____
###Markdown
Without return annotation we get an `Int` back since that's what was passed.
###Code
class A(Transform): pass
@A
def _(self, x:Int): return x//2 # `_` is an abbreviation for `encodes`
@A
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), Int(1))
test_eq_type(f(2), 2)
test_eq_type(f(2.), 3.)
###Output
_____no_output_____
###Markdown
Without return annotation we don't cast if we're not a subclass of the input type.
###Code
class A(Transform):
def encodes(self, x:Int): return x/2
def _(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), 1.)
test_eq_type(f(2), 2)
test_eq_type(f(Float(2.)), Float(3.))
###Output
_____no_output_____
###Markdown
With return annotation `None` we get back whatever Python creates usually.
###Code
def func(x)->None: return x/2
f = Transform(func)
test_eq_type(f(2), 1.)
test_eq_type(f(2.), 1.)
###Output
_____no_output_____
###Markdown
Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.
###Code
def func(x): return Int(x+1)
def dec (x): return x-1
f = Transform(func,dec)
t = f(1)
test_eq_type(t, Int(2))
test_eq_type(f.decode(t), Int(1))
###Output
_____no_output_____
###Markdown
If the transform has `filt` then it's only applied if `filt` param matches.
###Code
f.filt = 1
test_eq(f(1, filt=1),2)
test_eq_type(f(1, filt=0), 1)
class A(Transform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = A(as_item=True)
t = f((1,2))
test_eq(t, (3,2))
test_eq(f.decode(t), (1,2))
f.filt = 1
test_eq(f((1,2), filt=1), (3,2))
test_eq(f((1,2), filt=0), (1,2))
class AL(Transform): pass
@AL
def encodes(self, x): return L(x_+1 for x_ in x)
@AL
def decodes(self, x): return L(x_-1 for x_ in x)
f = AL(as_item=True)
t = f([1,2])
test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
def neg_int(x:numbers.Integral): return -x
f = Transform(neg_int, as_item=False)
test_eq(f([1]), (-1,))
test_eq(f([1.]), (1.,))
test_eq(f([1.,2,3.]), (1.,-2,3.))
test_eq(f.decode([1,2]), (1,2))
#export
class InplaceTransform(Transform):
"A `Transform` that modifies in-place and just returns whatever it's passed"
def _call(self, fn, x, filt=None, **kwargs):
super()._call(fn,x,filt,**kwargs)
return x
###Output
_____no_output_____
###Markdown
TupleTransform
###Code
#export
class TupleTransform(Transform):
"`Transform` that always treats `as_item` as `False`"
as_item_force=False
#export
class ItemTransform (Transform):
"`Transform` that always treats `as_item` as `True`"
as_item_force=True
def float_to_int(x:(float,int)): return Int(x)
f = TupleTransform(float_to_int)
test_eq_type(f([1.]), (Int(1),))
test_eq_type(f([1]), (Int(1),))
test_eq_type(f(['1']), ('1',))
test_eq_type(f([1,'1']), (Int(1),'1'))
test_eq(f.decode([1]), [1])
test_eq_type(f(TupleBase((1.,))), TupleBase((Int(1),)))
class B(TupleTransform): pass
class C(TupleTransform): pass
f = B()
test_eq(f([1]), [1])
@B
def _(self, x:int): return x+1
@B
def _(self, x:str): return x+'1'
@B
def _(self, x)->None: return str(x)+'!'
b,c = B(),C()
test_eq(b([1]), [2])
test_eq(b(['1']), ('11',))
test_eq(b([1.0]), ('1.0!',))
test_eq(c([1]), [1])
test_eq(b([1,2]), (2,3))
test_eq(b.decode([2]), [2])
assert pickle.loads(pickle.dumps(b))
@B
def decodes(self, x:int): return x-1
test_eq(b.decode([2]), [1])
test_eq(b.decode(('2',)), ('2',))
###Output
_____no_output_____
###Markdown
Non-type-constrained functions are applied to all elements of a tuple.
###Code
class A(TupleTransform): pass
@A
def _(self, x): return x+1
@A
def decodes(self, x): return x-1
f = A()
t = f((1,2.0))
test_eq_type(t, (2,3.0))
test_eq_type(f.decode(t), (1,2.0))
###Output
_____no_output_____
###Markdown
Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.
###Code
class B(TupleTransform):
def encodes(self, x:int): return Int(x+1)
def encodes(self, x:str): return x+'1'
def decodes(self, x:Int): return x//2
f = B()
start = (1.,2,'3')
t = f(start)
test_eq_type(t, (1.,Int(3),'31'))
test_eq(f.decode(t), (1.,Int(1),'31'))
###Output
_____no_output_____
###Markdown
The same behavior also works with `typing` module type classes.
###Code
class A(Transform): pass
@A
def _(self, x:numbers.Integral): return x+1
@A
def _(self, x:float): return x*3
@A
def decodes(self, x:int): return x-1
f = A()
start = 1.0
t = f(start)
test_eq(t, 3.)
test_eq(f.decode(t), 3)
f = A(as_item=False)
start = (1.,2,3.)
t = f(start)
test_eq(t, (3.,3,9.))
test_eq(f.decode(t), (3.,2,9.))
###Output
_____no_output_____
###Markdown
Transform accepts lists
###Code
def a(x): return L(x_+1 for x_ in x)
def b(x): return L(x_-1 for x_ in x)
f = TupleTransform(a,b)
t = f((L(1,2),))
test_eq(t, (L(2,3),))
test_eq(f.decode(t), (L(1,2),))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
Converted tmp.ipynb.
###Markdown
Transforms Helpers
###Code
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> Tuple[float,float]: return x
test_eq(anno_ret(f), Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p1_anno(f):
"Get the annotation of first param of `f`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
return ann[0] if ann else object
def _f(a, b): pass
test_eq(_p1_anno(_f), object)
def _f(a, b)->str: pass
test_eq(_p1_anno(_f), object)
def _f(a, b:str)->float: pass
test_eq(_p1_anno(_f), str)
def _f(a:int, b:int)->float: pass
test_eq(_p1_anno(_f), int)
test_eq(_p1_anno(attrgetter('foo')), object)
###Output
_____no_output_____
###Markdown
Types
###Code
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, **kwargs):
fig,ax = plt.subplots(nrows,ncols,**kwargs)
if nrows*ncols==1: ax = array([ax])
return fig,ax
#export
class TensorImageBase(TensorBase):
_show_args = {'cmap':'viridis'}
def show(self, ctx=None, **kwargs):
return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
n_samples = min(self.shape[0], max_n)
rows = rows or int(np.ceil(math.sqrt(n_samples)))
cols = cols or int(np.ceil(math.sqrt(n_samples)))
figsize = (cols*3, rows*3) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
return axs.flatten()
#export
class TensorImage(TensorImageBase): pass
#export
class TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}
#export
class TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}
im = Image.open(TEST_IMAGE)
im_t = TensorImage(array(im))
test_eq(type(im_t), TensorImage)
im_t2 = TensorMask(tensor(1))
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
test_fig_exists(ax)
#hide
axes = im_t.get_ctxs(1)
test_eq(axes.shape,[1])
plt.close()
axes = im_t.get_ctxs(4)
test_eq(axes.shape,[4])
plt.close()
###Output
_____no_output_____
###Markdown
TypeDispatch -
###Code
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs,self.cache = {},{}
for f in funcs: self.add(f)
self.inst = None
def _reset(self):
self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}
self.cache = {**self.funcs}
def add(self, f):
"Add type `t` and function `f`"
self.funcs[_p1_anno(f) or object] = f
self._reset()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})
def __call__(self, x, *args, **kwargs):
f = self[type(x)]
if not f: return x
if self.inst: f = types.MethodType(f, self.inst)
return f(x, *args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
if k in self.cache: return self.cache[k]
types = [f for f in self.funcs if issubclass(k,f)]
res = self.funcs[types[0]] if types else None
self.cache[k] = res
return res
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_bti(x:TensorMask): return x
def f_fti(x:TensorImage): return x
def f_bll(x:bool): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)
test_eq(t[int], f_nin)
test_eq(t[str], None)
test_eq(t[TensorImage], f_fti)
test_eq(t[float], f_num)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[int], f_nin)
test_eq(t(1), 2)
test_eq(t.returns(1), int)
t
def m_nin(self, x:numbers.Integral): return x+1
def m_bll(self, x:bool): return x
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), 2)
test_eq(a.f(1.), 1.)
###Output
_____no_output_____
###Markdown
Transform -
###Code
#export
class _TfmDict(dict):
def __setitem__(self,k,v):
if k=='_': k='encodes'
if k not in ('encodes','decodes') or not isinstance(v,Callable): return super().__setitem__(k,v)
if k not in self: super().__setitem__(k,TypeDispatch())
res = self[k]
res.add(v)
#export
class _TfmMeta(type):
def __new__(cls, name, bases, dict):
res = super().__new__(cls, name, bases, dict)
res.__signature__ = inspect.signature(res.__init__)
return res
def __call__(cls, *args, **kwargs):
f = args[0] if args else None
n = getattr(f,'__name__',None)
if not hasattr(cls,'encodes'): cls.encodes=TypeDispatch()
if not hasattr(cls,'decodes'): cls.decodes=TypeDispatch()
if isinstance(f,Callable) and n in ('decodes','encodes','_'):
getattr(cls,'encodes' if n=='_' else n).add(f)
return f
return super().__call__(*args, **kwargs)
@classmethod
def __prepare__(cls, name, bases): return _TfmDict()
#export
class Transform(metaclass=_TfmMeta):
"Delegates (`__call__`,`decode`) to (`encodes`,`decodes`) if `filt` matches"
filt,init_enc,as_item_force,as_item,order = None,False,None,True,0
def __init__(self, enc=None, dec=None, filt=None, as_item=False):
self.filt,self.as_item = ifnone(filt, self.filt),as_item
self.init_enc = enc or dec
if not self.init_enc: return
# Passing enc/dec, so need to remove (base) class level enc/dec
del(self.__class__.encodes,self.__class__.decodes)
self.encodes,self.decodes = (TypeDispatch(),TypeDispatch())
if enc:
self.encodes.add(enc)
self.order = getattr(self.encodes,'order',self.order)
if dec: self.decodes.add(dec)
@property
def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'
def _call(self, fn, x, filt=None, **kwargs):
if filt!=self.filt and self.filt is not None: return x
f = getattr(self, fn)
if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
return retain_type(res, x)
def _do_call(self, f, x, **kwargs):
return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
add_docs(Transform, decode="Delegate to `decodes` to undo transform")
show_doc(Transform)
###Output
_____no_output_____
###Markdown
Base class that delegates `__call__` and `decode` to `encodes` and `decodes`, doing nothing if the param annotation doesn't match the input type. If called with a listy `x` it calls the function on each item (unless `whole_tuple` is set, in which case `x` is passed directly as a whole). The function (if matching the 1st param type) will cast the result to the same type as the input, unless there's a return annotation (in which case it's cast to that), or the return annotation is `None` (in which case no casting is done). Details: `Transform` is a base class where you override encodes and/or decodes. e.g. `__call__` uses `call` which looks up what to call using `func`. If `whole_tuple` is set, that just returns `encodes` (or `decodes` if not `is_enc`). Otherwise we find the first annotated param with `_p1_anno` and check if `x` is an instance of that (if not `is_listy(x)`). If it is, we return the function (encodes/decodes), otherwise None. `call` then passes on to `_do_call`, which does nothing if the function is `None`. If `x` is listy, then we return a *list* of {functions or `None`}, and a list of results from `_do_call` for each function is returned.
###Code
class A(Transform): pass
@A
def encodes(self, x): return x+1
f1 = A()
test_eq(f1(1), 2)
class B(A): pass
f2 = B()
test_eq(f2(1), 2)
class A(Transform): pass
f3 = A()
test_eq_type(f3(2), 2)
test_eq_type(f3.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
`Transform` can be used as a decorator, to turn a function into a `Transform`.
###Code
@Transform
def f(x): return x//2
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
You can derive from `Transform` and use either `_` or `encodes` for your encoding function.
###Code
class A(Transform):
def _(self, x:TensorImage): return -x
f = A()
t = f(im_t)
test_eq(t, -im_t)
test_eq(f(1), 1)
test_eq(type(t), TensorImage)
f
###Output
_____no_output_____
###Markdown
Without return annotation we get an `Int` back since that's what was passed.
###Code
class A(Transform): pass
@A
def _(self, x:Int): return x//2 # `_` is an abbreviation for `encodes`
@A
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), Int(1))
test_eq_type(f(2), 2)
test_eq_type(f(2.), 3.)
###Output
_____no_output_____
###Markdown
Without return annotation we don't cast if we're not a subclass of the input type.
###Code
class A(Transform):
def encodes(self, x:Int): return x/2
def _(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), 1.)
test_eq_type(f(2), 2)
test_eq_type(f(Float(2.)), Float(3.))
###Output
_____no_output_____
###Markdown
With return annotation `None` we get back whatever Python creates usually.
###Code
def func(x)->None: return x/2
f = Transform(func)
test_eq_type(f(2), 1.)
test_eq_type(f(2.), 1.)
###Output
_____no_output_____
###Markdown
Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.
###Code
def func(x): return Int(x+1)
def dec (x): return x-1
f = Transform(func,dec)
t = f(1)
test_eq_type(t, Int(2))
test_eq_type(f.decode(t), Int(1))
###Output
_____no_output_____
###Markdown
If the transform has `filt` then it's only applied if `filt` param matches.
###Code
f.filt = 1
test_eq(f(1, filt=1),2)
test_eq_type(f(1, filt=0), 1)
class A(Transform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = A(as_item=True)
t = f((1,2))
test_eq(t, (3,2))
test_eq(f.decode(t), (1,2))
f.filt = 1
test_eq(f((1,2), filt=1), (3,2))
test_eq(f((1,2), filt=0), (1,2))
class AL(Transform): pass
@AL
def encodes(self, x): return L(x_+1 for x_ in x)
@AL
def decodes(self, x): return L(x_-1 for x_ in x)
f = AL(as_item=True)
t = f([1,2])
test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
def neg_int(x:numbers.Integral): return -x
f = Transform(neg_int, as_item=False)
test_eq(f([1]), (-1,))
test_eq(f([1.]), (1.,))
test_eq(f([1.,2,3.]), (1.,-2,3.))
test_eq(f.decode([1,2]), (1,2))
#export
class InplaceTransform(Transform):
"A `Transform` that modifies in-place and just returns whatever it's passed"
def _call(self, fn, x, filt=None, **kwargs):
super()._call(fn,x,filt,**kwargs)
return x
###Output
_____no_output_____
###Markdown
TupleTransform
###Code
#export
class TupleTransform(Transform):
"`Transform` that always treats `as_item` as `False`"
as_item_force=False
#export
class ItemTransform (Transform):
"`Transform` that always treats `as_item` as `True`"
as_item_force=True
def float_to_int(x:(float,int)): return Int(x)
f = TupleTransform(float_to_int)
test_eq_type(f([1.]), (Int(1),))
test_eq_type(f([1]), (Int(1),))
test_eq_type(f(['1']), ('1',))
test_eq_type(f([1,'1']), (Int(1),'1'))
test_eq(f.decode([1]), [1])
test_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))
class B(TupleTransform): pass
class C(TupleTransform): pass
f = B()
test_eq(f([1]), [1])
@B
def _(self, x:int): return x+1
@B
def _(self, x:str): return x+'1'
@B
def _(self, x)->None: return str(x)+'!'
b,c = B(),C()
test_eq(b([1]), [2])
test_eq(b(['1']), ('11',))
test_eq(b([1.0]), ('1.0!',))
test_eq(c([1]), [1])
test_eq(b([1,2]), (2,3))
test_eq(b.decode([2]), [2])
assert pickle.loads(pickle.dumps(b))
@B
def decodes(self, x:int): return x-1
test_eq(b.decode([2]), [1])
test_eq(b.decode(('2',)), ('2',))
###Output
_____no_output_____
###Markdown
Non-type-constrained functions are applied to all elements of a tuple.
###Code
class A(TupleTransform): pass
@A
def _(self, x): return x+1
@A
def decodes(self, x): return x-1
f = A()
t = f((1,2.0))
test_eq_type(t, (2,3.0))
test_eq_type(f.decode(t), (1,2.0))
###Output
_____no_output_____
###Markdown
Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.
###Code
class B(TupleTransform):
def encodes(self, x:int): return Int(x+1)
def encodes(self, x:str): return x+'1'
def decodes(self, x:Int): return x//2
f = B()
start = (1.,2,'3')
t = f(start)
test_eq_type(t, (1.,Int(3),'31'))
test_eq(f.decode(t), (1.,Int(1),'31'))
###Output
_____no_output_____
###Markdown
The same behavior also works with `typing` module type classes.
###Code
class A(Transform): pass
@A
def _(self, x:numbers.Integral): return x+1
@A
def _(self, x:float): return x*3
@A
def decodes(self, x:int): return x-1
f = A()
start = 1.0
t = f(start)
test_eq(t, 3.)
test_eq(f.decode(t), 3)
f = A(as_item=False)
start = (1.,2,3.)
t = f(start)
test_eq(t, (3.,3,9.))
test_eq(f.decode(t), (3.,2,9.))
###Output
_____no_output_____
###Markdown
Transform accepts lists
###Code
def a(x): return L(x_+1 for x_ in x)
def b(x): return L(x_-1 for x_ in x)
f = TupleTransform(a,b)
t = f((L(1,2),))
test_eq(t, (L(2,3),))
test_eq(f.decode(t), (L(1,2),))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_test_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted notebook2jekyll.ipynb.
Converted tmp.ipynb.
###Markdown
Transforms Helpers
###Code
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> Tuple[float,float]: return x
test_eq(anno_ret(f), Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p1_anno(f):
"Get the annotation of first param of `f`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
return ann[0] if ann else object
def _f(a, b): pass
test_eq(_p1_anno(_f), object)
def _f(a, b)->str: pass
test_eq(_p1_anno(_f), object)
def _f(a, b:str)->float: pass
test_eq(_p1_anno(_f), str)
def _f(a:int, b:int)->float: pass
test_eq(_p1_anno(_f), int)
def _f(a:int, b:str)->float: pass
test_eq(_p1_anno(_f), int)
test_eq(_p1_anno(attrgetter('foo')), object)
###Output
_____no_output_____
###Markdown
Types `TensorImage`, `TensorImageBW` and `TensorMask` are subclasses of `torch.Tensor` that know how to show themselves.
###Code
#export
@delegates(plt.subplots, keep=True)
def subplots(nrows=1, ncols=1, **kwargs):
fig,ax = plt.subplots(nrows,ncols,**kwargs)
if nrows*ncols==1: ax = array([ax])
return fig,ax
#export
class TensorImageBase(TensorBase):
_show_args = {'cmap':'viridis'}
def show(self, ctx=None, **kwargs):
return show_image(self, ctx=ctx, **{**self._show_args, **kwargs})
def get_ctxs(self, max_n=10, rows=None, cols=None, figsize=None, **kwargs):
n_samples = min(self.shape[0], max_n)
rows = rows or int(np.ceil(math.sqrt(n_samples)))
cols = cols or int(np.ceil(math.sqrt(n_samples)))
figsize = (cols*3, rows*3) if figsize is None else figsize
_,axs = subplots(rows, cols, figsize=figsize)
return axs.flatten()
#export
class TensorImage(TensorImageBase): pass
#export
class TensorImageBW(TensorImage): _show_args = {'cmap':'Greys'}
#export
class TensorMask(TensorImageBase): _show_args = {'alpha':0.5, 'cmap':'tab20'}
im = Image.open(TEST_IMAGE)
im_t = TensorImage(array(im))
test_eq(type(im_t), TensorImage)
im_t2 = TensorMask(tensor(1))
test_eq(type(im_t2), TensorMask)
test_eq(im_t2, tensor(1))
ax = im_t.show(figsize=(2,2))
test_fig_exists(ax)
#hide
axes = im_t.get_ctxs(1)
test_eq(axes.shape,[1])
plt.close()
axes = im_t.get_ctxs(4)
test_eq(axes.shape,[4])
plt.close()
###Output
_____no_output_____
###Markdown
TypeDispatch - The following class is the basis that allows us to do type dispatch with type annotations. It contains a dictionary mapping types to functions and ensures that the proper function is called when passed an object (depending on its type).
###Code
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs,self.cache = {},{}
for f in funcs: self.add(f)
self.inst = None
def _reset(self):
self.funcs = {k:self.funcs[k] for k in sorted(self.funcs, key=cmp_instance, reverse=True)}
self.cache = {**self.funcs}
def add(self, f):
"Add type `t` and function `f`"
self.funcs[_p1_anno(f) or object] = f
self._reset()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def __repr__(self): return str({getattr(k,'__name__',str(k)):v.__name__ for k,v in self.funcs.items()})
def __call__(self, x, *args, **kwargs):
f = self[type(x)]
if not f: return x
if self.inst is not None: f = types.MethodType(f, self.inst)
return f(x, *args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
if k in self.cache: return self.cache[k]
types = [f for f in self.funcs if issubclass(k,f)]
res = self.funcs[types[0]] if types else None
self.cache[k] = res
return res
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_bti(x:TensorMask): return x
def f_fti(x:TensorImage): return x
def f_bll(x:bool): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_fti,f_num,f_bti,f_bll)
test_eq(t[int], f_nin)
test_eq(t[str], None)
test_eq(t[TensorImage], f_fti)
test_eq(t[float], f_num)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[int], f_nin)
test_eq(t(1), 2)
test_eq(t.returns(1), int)
t
def m_nin(self, x:numbers.Integral): return x+1
def m_bll(self, x:bool): self.foo='a'
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), 2)
test_eq(a.f(1.), 1.)
a.f(False)
test_eq(a.foo, 'a')
###Output
_____no_output_____
###Markdown
Transform -
###Code
#export
_tfm_methods = 'encodes','decodes','setups'
class _TfmDict(dict):
def __setitem__(self,k,v):
if k not in _tfm_methods or not isinstance(v,Callable): return super().__setitem__(k,v)
if k not in self: super().__setitem__(k,TypeDispatch())
res = self[k]
res.add(v)
#export
class _TfmMeta(type):
def __new__(cls, name, bases, dict):
res = super().__new__(cls, name, bases, dict)
res.__signature__ = inspect.signature(res.__init__)
return res
def __call__(cls, *args, **kwargs):
f = args[0] if args else None
n = getattr(f,'__name__',None)
for nm in _tfm_methods:
if not hasattr(cls,nm): setattr(cls, nm, TypeDispatch())
if isinstance(f,Callable) and n in _tfm_methods:
getattr(cls,n).add(f)
return f
return super().__call__(*args, **kwargs)
@classmethod
def __prepare__(cls, name, bases): return _TfmDict()
#export
class Transform(metaclass=_TfmMeta):
"Delegates (`__call__`,`decode`,`setup`) to (`encodes`,`decodes`,`setups`) if `filt` matches"
filt,init_enc,as_item_force,as_item,order = None,False,None,True,0
def __init__(self, enc=None, dec=None, filt=None, as_item=False):
self.filt,self.as_item = ifnone(filt, self.filt),as_item
self.init_enc = enc or dec
if not self.init_enc: return
# Passing enc/dec, so need to remove (base) class level enc/dec
del(self.__class__.encodes,self.__class__.decodes,self.__class__.setups)
self.encodes,self.decodes,self.setups = TypeDispatch(),TypeDispatch(),TypeDispatch()
if enc:
self.encodes.add(enc)
self.order = getattr(self.encodes,'order',self.order)
if dec: self.decodes.add(dec)
@property
def use_as_item(self): return ifnone(self.as_item_force, self.as_item)
def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
def setup(self, items=None): return self.setups(items)
def __repr__(self): return f'{self.__class__.__name__}: {self.use_as_item} {self.encodes} {self.decodes}'
def _call(self, fn, x, filt=None, **kwargs):
if filt!=self.filt and self.filt is not None: return x
f = getattr(self, fn)
if self.use_as_item or not is_listy(x): return self._do_call(f, x, **kwargs)
res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
return retain_type(res, x)
def _do_call(self, f, x, **kwargs):
return x if f is None else retain_type(f(x, **kwargs), x, f.returns_none(x))
add_docs(Transform, decode="Delegate to `decodes` to undo transform", setup="Delegate to `setups` to set up transform")
show_doc(Transform)
###Output
_____no_output_____
###Markdown
A `Transform` is the main building block of the fastai data pipelines. In the most general terms a transform can be any function you want to apply to your data, however, the `Transform` class provides several mechanisms that make the process of building them easy and flexible. The main `Transform` features:- **Type dispatch** - Type annotations are used to determine if a transform should be applied to the given argument. It also gives an option to provide several implementations and it chooses the one to run based on the type. This is useful for example when running both independent and dependent variables through the pipeline where some transforms only make sense for one and not the other. Another use case is designing a transform that handles different data formats. Note that if a transform takes multiple arguments only the type of the first one is used for dispatch. - **Handling of tuples** - When a tuple (or another collection satisfying `is_listy`) of data is passed to a transform it will get applied to each element separately. Most commonly it will be an *(x,y)* tuple, but it can be anything, for example a list of images. You can opt out of this behavior by setting the flag `as_item=True`. For transforms that must always operate on the tuple level you can set `as_item_force=True`, which takes precedence over `as_item`; an example of that is `PointScaler`.- **Reversibility** - A transform can be made reversible by implementing the `decodes` method. This is mainly used to turn something like a category which is encoded as a number back into a label understandable by humans for showing purposes.- **Type propagation** - Whenever possible a transform tries to return data of the same type it received. Mainly used to maintain semantics of things like `TensorImage` which is a thin wrapper of PyTorch's `Tensor`. You can opt out of this behavior by adding a `->None` return type annotation.- **Preprocessing** - The `setup` method can be used to perform any one-time calculations to be later used by the transform, for example generating a vocabulary to encode categorical data.- **Filtering based on the dataset type** - By setting the `filt` flag you can make the transform be used only in a specific `DataSource` subset like in training, but not validation.- **Ordering** - You can set the `order` attribute which the `Pipeline` uses when it needs to merge two lists of transforms.- **Appending new behavior with decorators** - You can easily extend an existing `Transform` by creating `encodes` or `decodes` methods for new data types. You can put those new methods outside the original transform definition and decorate them with the class you wish them patched into. This can be used by fastai library users to add their own behavior, or by multiple modules contributing to the same transform. Defining a `Transform` There are a few ways to create a transform with different ratios of simplicity to flexibility.- **Extending the `Transform` class** - Use inheritance to implement the methods you want.- **Passing methods to the constructor** - Instantiate the `Transform` class and pass your functions as `enc` and `dec` arguments.- **@Transform decorator** - Turn any function into a `Transform` by just adding a decorator - very straightforward if all you need is a single `encodes` implementation.- **Passing a function to fastai APIs** - Same as above, but when passing a function to other transform-aware classes like `Pipeline` or `TfmdDS` you don't even need a decorator.
Your function will get converted to a `Transform` automatically.
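As a small illustration of the `order` attribute mentioned above (just a sketch; `Pipeline`, which actually consumes `order`, lives in a later notebook), transforms can simply be sorted by it:
###Code
# Sketch: `order` is a plain class attribute (default 0) that a pipeline can sort by.
class ToTensorTfm(Transform): order = 5   # hypothetical names, for illustration only
class NormalizeTfm(Transform): order = 10
tfms = [NormalizeTfm(), ToTensorTfm()]
# sorting by `order` is what a pipeline would do before applying transforms in sequence
test_eq([type(t).__name__ for t in sorted(tfms, key=lambda o: o.order)], ['ToTensorTfm','NormalizeTfm'])
###Output
_____no_output_____
###Markdown
Now let's look at the simplest ways of defining and extending a `Transform`: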
###Code
class A(Transform): pass
@A
def encodes(self, x): return x+1
f1 = A()
test_eq(f1(1), 2)
class B(A): pass
f2 = B()
test_eq(f2(1), 2)
class A(Transform): pass
f3 = A()
test_eq_type(f3(2), 2)
test_eq_type(f3.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
`Transform` can be used as a decorator, to turn a function into a `Transform`.
###Code
f = Transform(lambda o:o//2)
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
@Transform
def f(x): return x//2
test_eq_type(f(2), 1)
test_eq_type(f.decode(2.0), 2.0)
###Output
_____no_output_____
###Markdown
You can derive from `Transform` and use `encodes` for your encoding function.
###Code
class A(Transform):
def encodes(self, x:TensorImage): return -x
def decodes(self, x:TensorImage): return x+1
def setups (self, x:TensorImage): x.foo = 'a'
f = A()
t = f(im_t)
test_eq(t, -im_t)
test_eq(f(1), 1)
test_eq(type(t), TensorImage)
test_eq(f.decode(t), -im_t+1)
test_eq(f.decode(1), 1)
f.setup(im_t)
test_eq(im_t.foo, 'a')
t2 = tensor(1)
f.setup(t2)
assert not hasattr(t2,'foo')
f
###Output
_____no_output_____
###Markdown
Without return annotation we get an `Int` back since that's what was passed.
###Code
class A(Transform): pass
@A
def encodes(self, x:Int): return x//2
@A
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), Int(1))
test_eq_type(f(2), 2)
test_eq_type(f(2.), 3.)
###Output
_____no_output_____
###Markdown
Without return annotation we don't cast if we're not a subclass of the input type.
###Code
class A(Transform):
def encodes(self, x:Int): return x/2
def encodes(self, x:float): return x+1
f = A()
test_eq_type(f(Int(2)), 1.)
test_eq_type(f(2), 2)
test_eq_type(f(Float(2.)), Float(3.))
###Output
_____no_output_____
###Markdown
With return annotation `None` we get back whatever Python creates usually.
###Code
def func(x)->None: return x/2
f = Transform(func)
test_eq_type(f(2), 1.)
test_eq_type(f(2.), 1.)
###Output
_____no_output_____
###Markdown
Since `decodes` has no return annotation, but `encodes` created an `Int` and we pass that result here to `decode`, we end up with an `Int`.
###Code
def func(x): return Int(x+1)
def dec (x): return x-1
f = Transform(func,dec)
t = f(1)
test_eq_type(t, Int(2))
test_eq_type(f.decode(t), Int(1))
###Output
_____no_output_____
###Markdown
If the transform has `filt` then it's only applied if `filt` param matches.
###Code
f.filt = 1
test_eq(f(1, filt=1),2)
test_eq_type(f(1, filt=0), 1)
###Output
_____no_output_____
###Markdown
If `as_item=True` the transform takes tuples as a whole and is applied to them.
###Code
class A(Transform):
def encodes(self, xy): x,y=xy; return (x+y,y)
def decodes(self, xy): x,y=xy; return (x-y,y)
f = A(as_item=True)
t = f((1,2))
test_eq(t, (3,2))
test_eq(f.decode(t), (1,2))
f.filt = 1
test_eq(f((1,2), filt=1), (3,2))
test_eq(f((1,2), filt=0), (1,2))
class AL(Transform): pass
@AL
def encodes(self, x): return L(x_+1 for x_ in x)
@AL
def decodes(self, x): return L(x_-1 for x_ in x)
f = AL(as_item=True)
t = f([1,2])
test_eq(t, [2,3])
test_eq(f.decode(t), [1,2])
###Output
_____no_output_____
###Markdown
If `as_item=False` the transform is applied to each element of a listy input.
###Code
def neg_int(x:numbers.Integral): return -x
f = Transform(neg_int, as_item=False)
test_eq(f([1]), (-1,))
test_eq(f([1.]), (1.,))
test_eq(f([1.,2,3.]), (1.,-2,3.))
test_eq(f.decode([1,2]), (1,2))
#export
class InplaceTransform(Transform):
"A `Transform` that modifies in-place and just returns whatever it's passed"
def _call(self, fn, x, filt=None, **kwargs):
super()._call(fn,x,filt,**kwargs)
return x
###Output
_____no_output_____
###Markdown
TupleTransform
###Code
#export
class TupleTransform(Transform):
"`Transform` that always treats `as_item` as `False`"
as_item_force=False
#export
class ItemTransform (Transform):
"`Transform` that always treats `as_item` as `True`"
as_item_force=True
def float_to_int(x:(float,int)): return Int(x)
f = TupleTransform(float_to_int)
test_eq_type(f([1.]), (Int(1),))
test_eq_type(f([1]), (Int(1),))
test_eq_type(f(['1']), ('1',))
test_eq_type(f([1,'1']), (Int(1),'1'))
test_eq(f.decode([1]), [1])
test_eq_type(f(TupleBase(1.)), TupleBase(Int(1)))
class B(TupleTransform): pass
class C(TupleTransform): pass
f = B()
test_eq(f([1]), [1])
@B
def encodes(self, x:int): return x+1
@B
def encodes(self, x:str): return x+'1'
@B
def encodes(self, x)->None: return str(x)+'!'
b,c = B(),C()
test_eq(b([1]), [2])
test_eq(b(['1']), ('11',))
test_eq(b([1.0]), ('1.0!',))
test_eq(c([1]), [1])
test_eq(b([1,2]), (2,3))
test_eq(b.decode([2]), [2])
assert pickle.loads(pickle.dumps(b))
@B
def decodes(self, x:int): return x-1
test_eq(b.decode([2]), [1])
test_eq(b.decode(('2',)), ('2',))
###Output
_____no_output_____
###Markdown
Non-type-constrained functions are applied to all elements of a tuple.
###Code
class A(TupleTransform): pass
@A
def encodes(self, x): return x+1
@A
def decodes(self, x): return x-1
f = A()
t = f((1,2.0))
test_eq_type(t, (2,3.0))
test_eq_type(f.decode(t), (1,2.0))
###Output
_____no_output_____
###Markdown
Type-constrained functions are applied to only matching elements of a tuple, and return annotations are only applied where matching.
###Code
class B(TupleTransform):
def encodes(self, x:int): return Int(x+1)
def encodes(self, x:str): return x+'1'
def decodes(self, x:Int): return x//2
f = B()
start = (1.,2,'3')
t = f(start)
test_eq_type(t, (1.,Int(3),'31'))
test_eq(f.decode(t), (1.,Int(1),'31'))
###Output
_____no_output_____
###Markdown
The same behavior also works with `typing` module type classes.
###Code
class A(Transform): pass
@A
def encodes(self, x:numbers.Integral): return x+1
@A
def encodes(self, x:float): return x*3
@A
def decodes(self, x:int): return x-1
f = A()
start = 1.0
t = f(start)
test_eq(t, 3.)
test_eq(f.decode(t), 3)
f = A(as_item=False)
start = (1.,2,3.)
t = f(start)
test_eq(t, (3.,3,9.))
test_eq(f.decode(t), (3.,2,9.))
###Output
_____no_output_____
###Markdown
Transform accepts lists
###Code
def a(x): return L(x_+1 for x_ in x)
def b(x): return L(x_-1 for x_ in x)
f = TupleTransform(a,b)
t = f((L(1,2),))
test_eq(t, (L(2,3),))
test_eq(f.decode(t), (L(1,2),))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_torch_core.ipynb.
Converted 01b_script.ipynb.
Converted 01c_dataloader.ipynb.
Converted 02_data_transforms.ipynb.
Converted 03_data_pipeline.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_source.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
Converted 50_data_block.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_index.ipynb.
Converted 95_utils_test.ipynb.
Converted 96_data_external.ipynb.
Converted Untitled.ipynb.
Converted notebook2jekyll.ipynb.
|
examples/GluonTS_SageMaker_SDK_Tutorial.ipynb | ###Markdown
GluonTS SageMaker SDK Tutorial ***This notebook is meant to be uploaded to a SageMaker notebook instance and executed there. As a kernel choose `conda_mxnet_p36`*** ***In this how-to tutorial we will train a SimpleFeedForwardEstimator on the m4_hourly dataset on AWS SageMaker using the GluonTSFramework, and later review its performance. At the very end you will see how to launch your custom training script.*** ***In the end you should know how to train any GluonEstimator on any Dataset on SageMaker using the GluonTSFramework train(...) method, and how to run your own script using the run(...) method.*** Notebook Setup Currently, *GluonTSFramework* is only available through the master branch of *GluonTS*, so we install it with the required dependencies first:
###Code
!pip install --upgrade mxnet==1.6 git+https://github.com/awslabs/gluon-ts.git#egg=gluonts[dev]
# Third-party requirements
import boto3
import sagemaker
from pathlib import Path
import tempfile
# First-party requirements
from gluonts.nursery.sagemaker_sdk.estimator import GluonTSFramework
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.mx.trainer import Trainer
###Output
_____no_output_____
###Markdown
Credentials & Configuration Since we are executing this tutorial on a SageMaker notebook instance, many parameters that we would usually need to predefine manually can simply be retrieved from the environment. To highlight how you would have to set these parameters when executing a notebook like this on your local machine, take a look at the cell output:
###Code
temp_session = boto3.session.Session()
temp_sagemaker_session = sagemaker.session.Session(boto_session=temp_session)
bucket_name = f"s3://{temp_sagemaker_session.default_bucket()}"
print(f"bucket_name = '{bucket_name}'")
region_name = temp_session.region_name
print(f"region_name = '{region_name}'")
profile_name = temp_session.profile_name
print(f"profile_name = '{profile_name}'")
iam_role = sagemaker.get_execution_role()
print(f"iam_role = '{iam_role}'")
###Output
_____no_output_____
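###Markdown
If you were running this notebook on your local machine instead, you would have to set these four variables by hand. A minimal sketch (all values below are placeholders, not real resources) could look like this:
###Code
# Placeholder values for local execution (kept commented out on the notebook instance).
# Replace them with your own bucket, region, AWS CLI profile and SageMaker execution role.
# bucket_name = "s3://my-gluonts-experiments-bucket"
# region_name = "us-east-1"
# profile_name = "default"
# iam_role = "arn:aws:iam::123456789012:role/MySageMakerExecutionRole"
###Output
_____no_output_____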
###Markdown
Remember that in order to be able to use the profile 'default' (or any other profile) on your local machine you must have correctly set up your [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html). Additionally, the specified bucket needs to actually exist in the specified region. With this out of the way, we can continue as if we had set the above variables manually. Experimental Setup Experiment directory First, we should define the *S3 parent folder location* which will later contain the folder with all the data generated during the experiment (model artifacts, custom scripts, dependencies etc.). If you choose to use a subfolder for your experiments (like we do here) the folder does not have to exist yet, but its name must satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*. If not specified, the default bucket of the specified region itself will be used.
###Code
experiment_parent_dir = bucket_name + "/my-sagemaker-experiments"
print(f"experiment_parent_dir = '{experiment_parent_dir}'")
###Output
_____no_output_____
###Markdown
SageMaker session Next, we need to create a sagemaker session in our region using a [*boto3*](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.htmlusing-boto-3) session with our credentials (profile).
###Code
boto_session = boto3.session.Session(profile_name=profile_name, region_name=region_name)
sagemaker_session = sagemaker.session.Session(boto_session=boto_session)
###Output
_____no_output_____
###Markdown
AWS IAM role We also need to provide an AWS [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) role, with which to access the resources on our account.
###Code
role = iam_role
###Output
_____no_output_____
###Markdown
Training image & instance type We can just use one of the prebuilt SageMaker [ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html) images and install the gluonts version we prefer dynamically through the 'requirements.txt'.
###Code
general_instance_type = "cpu"
# general_instance_type = "gpu" # alternative
###Output
_____no_output_____
###Markdown
Depending on our *general_instance_type* choice we will have to select an appropriate concrete 'instance type':
###Code
instance_type = "ml.c5.xlarge" if general_instance_type == "cpu" else "ml.p2.xlarge"
###Output
_____no_output_____
###Markdown
and an appropriate prebuilt mxnet image (we will take the training images here):
###Code
if general_instance_type == "cpu":
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-cpu-py36-ubuntu16.04"
else:
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-gpu-py36-cu101-ubuntu16.04"
print(f"docker_image = '{docker_image}'")
###Output
_____no_output_____
###Markdown
Base job description We can give our training job a base name that lets us easily identify experiments of the same type. It has to satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*
###Code
base_job_description = "my-sagemaker-experiment-intro"
###Output
_____no_output_____
###Markdown
Dataset Here we have two choices; we can either pick a built-in dataset provided by GluonTS or use any dataset in the GluonTS dataset format located on S3, which would look like this: >dataset_name> |---> train> | |--> data.json> |---> test> | |--> data.json> |---> metadata.json Since we haven't uploaded any, let's pick a provided one for now. The following datasets are available:
###Code
print(dataset_recipes.keys())
###Output
_____no_output_____
###Markdown
How about "m4_hourly"?:
###Code
dataset_name = "m4_hourly"
# dataset_name = "s3://<your-custom-dataset-location>" # if using a custom dataset
###Output
_____no_output_____
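###Markdown
If you do want to use a custom dataset instead, a minimal sketch (assuming a local folder `my_dataset` that already follows the GluonTS format shown above; the folder name and S3 prefix are just examples) could upload it to the session's default bucket and point `dataset_name` at the resulting location:
###Code
# Sketch only, kept commented out so the notebook still runs end-to-end with "m4_hourly".
# custom_dataset_s3_path = sagemaker_session.upload_data(
#     path="my_dataset",                        # local folder containing train/, test/ and metadata.json
#     key_prefix="gluonts-datasets/my_dataset",  # prefix inside the session's default bucket
# )
# dataset_name = custom_dataset_s3_path
###Output
_____no_output_____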
###Markdown
We will need to know the *prediction_length* and *freq* of the dataset to define our SimpleFeedForwardEstimator, so let's keep track of them:
###Code
freq = dataset_recipes[dataset_name].keywords["pandas_freq"]
prediction_length = dataset_recipes[dataset_name].keywords["prediction_length"]
###Output
_____no_output_____
###Markdown
Requirements and Dependencies We will additionally have to provide a 'requirements.txt' file that specifies which GluonTS version we want to use. Here we will create a temporary requirements file, but you can just keep a 'requirements.txt' file in the folder where you launch your experiments.
###Code
requirements_dot_txt_file_name = "requirements.txt"
requirements_dot_txt_file_content = "git+https://github.com/awslabs/gluon-ts.git"
# only using temporary directory for demonstration
temp_dir = tempfile.TemporaryDirectory()
temp_dir_path = Path(temp_dir.name)
# create the requirements.txt file
with open(temp_dir_path / requirements_dot_txt_file_name, "w") as req_file: # has to be called requirements.txt
req_file.write(requirements_dot_txt_file_content)
my_requirements_txt_file_path = str(temp_dir_path / requirements_dot_txt_file_name)
print(f"my_requirements_txt_file_path = '{my_requirements_txt_file_path}'")
###Output
_____no_output_____
###Markdown
Define the Estimator Now we define the Estimator we want to train, which can be any GluonEstimator (except ) with any hyperparameter.
###Code
my_estimator = SimpleFeedForwardEstimator(
prediction_length=prediction_length,
freq=freq,
trainer=Trainer(ctx=general_instance_type, epochs=5) # optional
)
###Output
_____no_output_____
###Markdown
Launch the Experiment
###Code
my_experiment = GluonTSFramework(
sagemaker_session=sagemaker_session,
role=role,
image_name=docker_image,
base_job_name=base_job_description,
train_instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
)
###Output
_____no_output_____
###Markdown
And finally we call the *train* method to train our estimator, where we just specify our dataset and estimator:
###Code
results = my_experiment.train(dataset=dataset_name, estimator=my_estimator)
###Output
_____no_output_____
###Markdown
Review the Results The 'train(...)' function returns a TrainResult, which consists of the following fields:
###Code
print(results._fields)
###Output
_____no_output_____
###Markdown
So we could use the predictor straight away to predict on some additional data if we would like. We can also inspect our training history and monitored metrics (like resource consumption or epoch loss) on SageMaker under "Training/Training jobs" here:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{results.job_name}")
###Output
_____no_output_____
###Markdown
Or take a look at the metrics right here:
###Code
results.metrics[0]
###Output
_____no_output_____
###Markdown
Or head to our bucket to download the model artifacts:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{results.job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
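###Markdown
Since the TrainResult also hands us the trained predictor (as mentioned above), we could generate forecasts locally right away. The following is only a sketch and is kept commented out; it assumes the result exposes a `predictor` field and that `make_evaluation_predictions` from the installed GluonTS version can be used as shown:
###Code
# Sketch: forecast the test split with the predictor returned by train(...).
# from gluonts.evaluation.backtest import make_evaluation_predictions
#
# dataset = get_dataset(dataset_name)  # the same built-in dataset used for training
# forecast_it, ts_it = make_evaluation_predictions(
#     dataset.test, predictor=results.predictor, num_samples=100
# )
# first_forecast = next(iter(forecast_it))
# print(first_forecast.mean[:5])  # first few values of the mean forecast
###Output
_____no_output_____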
###Markdown
Run a custom python script The process to run a custom Python script is not much different; however, you will have to adapt your usual Python script to the particularities of SageMaker.
###Code
import os
import gluonts
import s3fs
###Output
_____no_output_____
###Markdown
Writing a custom script Your custom script has to adhere to a rough format; for this reason we provide the "run_entry_point.py" script with GluonTS under:
###Code
run_entry_point_path = (
Path(os.path.dirname(gluonts.__file__))
/ "nursery"
/ "sagemaker_sdk"
/ "entry_point_scripts"
/ "run_entry_point.py"
)
###Output
_____no_output_____
###Markdown
Let's take a look:
###Code
with open(run_entry_point_path, 'r') as script:
print(script.read())
###Output
_____no_output_____
###Markdown
As we can see, there is a *run* method, within which we are supposed to write our custom code. Additionally, at the bottom we might need to parse additional arguments that we provide, for example, through the "inputs" parameter of the GluonTSFramework.run(...) method. The "inputs" parameter cannot be empty due to the restrictions of the Framework base class of the GluonTSFramework; however, you can pass an empty file located on S3 as a dummy input. Let's define a path for the dummy file:
###Code
dummy_s3_file_path = bucket_name + "/dummy_1234"
print(f"dummy_s3_file_path = '{dummy_s3_file_path}'")
###Output
_____no_output_____
###Markdown
Let's create the S3 file (if the file already exists, you will have to set overwrite to 'True' or choose a different path for the dummy file):
###Code
overwrite = False
s3 = s3fs.S3FileSystem(anon=False) # uses default credentials
if not(s3.exists(dummy_s3_file_path)) or overwrite:
with s3.open(dummy_s3_file_path, 'w') as f:
f.write("This is a dummy file.")
print("Dummy file created!")
else:
print("No dummy file created!")
my_inputs = {'my_dataset_name': sagemaker.s3_input(dummy_s3_file_path, content_type='application/json')}
###Output
_____no_output_____
###Markdown
If we were to pass a dataset location as input as defined above, we would have to parse the location of that dataset (which will be uploaded into the container environment) for example like this: > parser.add_argument('--my_fancy_dataset', type=str, default=os.environ['SM_CHANNEL_MY_DATASET_NAME']) Prepending "SM_CHANNEL_" and converting the name to all caps is necessary. Within the *run(...)* method the location will be accessible by: > arguments.my_fancy_dataset Any additional "hyperparameters" you provide to *GluonTSFramework.run(...)* are already parsed by: >parser.add_argument("--sm-hps", type=json.loads, default=os.environ["SM_HPS"]) Get familiar tasks: For now, we will only use the unmodified run script, however, a good exercise to get familiar with the framework would be to modify the script so that:* You parse the location of the input we provide through "my_inputs" * You read the dummy file inside the run(...) method* You write the content of the file to a new file called "parsed.txt" and save it to the output location * You check in S3 that "parsed.txt" was saved to S3 in your experiment folder under /output/output.tar.gz HINT: you don't need to write or read from S3 explicitly, but rather access the appropriate local location through "arguments" of the run(...) method within your scripts; let SageMaker containers handle the interaction with S3. HINT: you can take a look at the "train_entry_point.py" to see an actual example of a training script. Run the Experiment As we will see, the arguments to the GluonTSFramework run(...) method are almost identical to those of train(...); however, we additionally specify the required "entry_point" and "inputs", and optionally "wait=False" because we might want to launch multiple jobs async.
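To make the exercise above concrete, here is a minimal, hypothetical sketch of such a modified entry point (the channel name "my_dataset_name", the dummy file name and the exact environment variables are assumptions based on this notebook, not part of the provided script):
###Code
# Sketch of a custom entry point, shown here for illustration only.
# It mirrors the structure of run_entry_point.py: parse SageMaker's environment variables,
# do the work inside run(...), and write results to the output directory, which SageMaker
# packs into output.tar.gz on S3 at the end of the job.
import argparse
import json
import os
def run(arguments):
    # the "my_dataset_name" channel directory contains the dummy file we uploaded
    with open(os.path.join(arguments.my_dataset_name, "dummy_1234")) as f:
        content = f.read()
    # anything written to the output data dir ends up in output.tar.gz on S3
    with open(os.path.join(arguments.output_data_dir, "parsed.txt"), "w") as out:
        out.write(content)
if __name__ == "__main__" and "SM_CHANNEL_MY_DATASET_NAME" in os.environ:
    # only executes inside the training container, where SageMaker sets the SM_* variables
    parser = argparse.ArgumentParser()
    parser.add_argument("--sm-hps", type=json.loads, default=os.environ.get("SM_HPS", "{}"))
    parser.add_argument("--my_dataset_name", type=str, default=os.environ["SM_CHANNEL_MY_DATASET_NAME"])
    parser.add_argument("--output-data-dir", type=str, default=os.environ.get("SM_OUTPUT_DATA_DIR", "/opt/ml/output/data"))
    arguments, _ = parser.parse_known_args()  # tolerate any extra arguments SageMaker passes
    run(arguments)
###Output
_____no_output_____
###Markdown
Now let's launch the actual run: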
###Code
my_experiment, my_job_name = GluonTSFramework.run(
entry_point=str(run_entry_point_path), # additionally required
inputs = my_inputs, # additionally required
sagemaker_session=sagemaker_session,
role=role,
image_name=docker_image,
base_job_name=base_job_description,
train_instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
wait=False # optional
)
###Output
_____no_output_____
###Markdown
We can take a look at the training job right away:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{my_job_name}")
###Output
_____no_output_____
###Markdown
And again, check out the corresponding S3 location:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{my_job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Custom GluonTS version: In case you are modifying GluonTS on your local machine and want to run experiments on your custom version, just import GluonTS and define: >gluont_ts_path = Path(gluonts.__path__[0]) >gluont_ts_requirements_path = gluont_ts_path.parent.parent / "requirements" / "requirements.txt" and change the dependencies argument of run(...) or train(...) as follows: > dependencies=[gluont_ts_requirements_path, gluont_ts_path] Cleanup Let's just clean up the temporary directory:
###Code
temp_dir.cleanup()
###Output
_____no_output_____
###Markdown
GluonTS SageMaker SDK Tutorial ***This notebook is meant to be uploaded to a SageMaker notebook instance and executed there. As a kernel choose `conda_mxnet_p36`*** ***In this how-to tutorial we will train a SimpleFeedForwardEstimator on the m4_hourly dataset on AWS SageMaker using the GluonTSFramework, and later review its performance. At the very end you will see how to launch your custom training script.*** ***In the end you should know how to train any GluonEstimator on any Dataset on SageMaker using the GluonTSFramework train(...) method, and how to run your own script using the run(...) method.*** Notebook Setup Currently, *GluonTSFramework* is only available through the master branch of *GluonTS*, so we install it with the required dependencies first:
###Code
!pip install --upgrade mxnet==1.6 git+https://github.com/awslabs/gluon-ts.git#egg=gluonts[dev]
# Third-party requirements
import boto3
import sagemaker
from pathlib import Path
import tempfile
# First-party requirements
from gluonts.nursery.sagemaker_sdk.estimator import GluonTSFramework
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.mx.trainer import Trainer
###Output
_____no_output_____
###Markdown
Credentials & Configuration Since we are executing this tutorial on a SageMaker notebook instance, many parameters that we would usually need to predefine manually can simply be retrieved from the environment. To highlight how you would have to set these parameters when executing a notebook like this on your local machine, take a look at the cell output:
###Code
temp_session = boto3.session.Session()
temp_sagemaker_session = sagemaker.session.Session(boto_session=temp_session)
bucket_name = f"s3://{temp_sagemaker_session.default_bucket()}"
print(f"bucket_name = '{bucket_name}'")
region_name = temp_session.region_name
print(f"region_name = '{region_name}'")
profile_name = temp_session.profile_name
print(f"profile_name = '{profile_name}'")
iam_role = sagemaker.get_execution_role()
print(f"iam_role = '{iam_role}'")
###Output
_____no_output_____
###Markdown
Remember that in order to be able to use the profile 'default' (or any other profile) on your local machine you must have correctly set up your [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html). Additionally, the specified bucket needs to actually exist in the specified region. With this out of the way, we can continue as if we had set the above variables manually. Experimental Setup Experiment directory First, we should define the *S3 parent folder location* which will later contain the folder with all the data generated during the experiment (model artifacts, custom scripts, dependencies etc.). If you choose to use a subfolder for your experiments (like we do here) the folder does not have to exist yet, but its name must satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*. If not specified, the default bucket of the specified region itself will be used.
###Code
experiment_parent_dir = bucket_name + "/my-sagemaker-experiments"
print(f"experiment_parent_dir = '{experiment_parent_dir}'")
###Output
_____no_output_____
###Markdown
SageMaker session Next, we need to create a sagemaker session in our region using a [*boto3*](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.htmlusing-boto-3) session with our credentials (profile).
###Code
boto_session = boto3.session.Session(profile_name=profile_name, region_name=region_name)
sagemaker_session = sagemaker.session.Session(boto_session=boto_session)
###Output
_____no_output_____
###Markdown
AWS IAM role We also need to provide an AWS [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) role, with which to access the resources on our account.
###Code
role = iam_role
###Output
_____no_output_____
###Markdown
Training image & instance type We can just use one of the prebuilt SageMaker [ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html) images and install the gluonts version we prefer dynamically through the 'requirements.txt'.
###Code
general_instance_type = "cpu"
# general_instance_type = "gpu" # alternative
###Output
_____no_output_____
###Markdown
Depending on our *general_instance_type* choice we will have to select an appropriate concrete 'instance type':
###Code
instance_type = "ml.c5.xlarge" if general_instance_type == "cpu" else "ml.p2.xlarge"
###Output
_____no_output_____
###Markdown
and an appropriate prebuilt mxnet image (we will take the training images here):
###Code
if general_instance_type == "cpu":
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-cpu-py36-ubuntu16.04"
else:
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-gpu-py36-cu101-ubuntu16.04"
print(f"docker_image = '{docker_image}'")
###Output
_____no_output_____
###Markdown
Base job description We can give our training job a base name that lets us easily identify experiments of the same type. It has to satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*
###Code
base_job_description = "my-sagemaker-experiment-intro"
###Output
_____no_output_____
###Markdown
Dataset Here we have two choices; we can either pick a built in dataset provided by GluonTS or any dataset in the GluonTS dataset format located on S3, which would look like this: >dataset_name> |---> train> | |--> data.json> |---> test> | |--> data.json> |---> metadata.json Since we haven't uploaded any, lets pick a provided one for now. The following datasets are available:
###Code
print(dataset_recipes.keys())
###Output
_____no_output_____
###Markdown
How about "m4_hourly"?:
###Code
dataset_name = "m4_hourly"
# dataset_name = "s3://<your-custom-dataset-location>" # if using a custom dataset
###Output
_____no_output_____
###Markdown
We will need to know the *prediction_length* and *freq* of the dataset to define our SimpleFeedForwardEstimator, so lets keep track of them:
###Code
freq = dataset_recipes[dataset_name].keywords["pandas_freq"]
prediction_length = dataset_recipes[dataset_name].keywords["prediction_length"]
###Output
_____no_output_____
###Markdown
Requirements and Dependencies We will additionally have to specify a 'requirements.txt' file where we specify which GluonTS version we want to use. Here we will create a temporary requirements file, but you can just have a 'requirements.txt' file in the folder where you launch your experiments.
###Code
requirements_dot_txt_file_name = "requirements.txt"
requirements_dot_txt_file_content = "git+https://github.com/awslabs/gluon-ts.git"
# only using temporary directory for demonstration
temp_dir = tempfile.TemporaryDirectory()
temp_dir_path = Path(temp_dir.name)
# create the requirements.txt file
with open(temp_dir_path / requirements_dot_txt_file_name, "w") as req_file: # has to be called requirements.txt
req_file.write(requirements_dot_txt_file_content)
my_requirements_txt_file_path = str(temp_dir_path / requirements_dot_txt_file_name)
print(f"my_requirements_txt_file_path = '{my_requirements_txt_file_path}'")
###Output
_____no_output_____
###Markdown
Define the Estimator Now we define the Estimator we want to train, which can be any GluonEstimator (except ) with any hyperparameter.
###Code
my_estimator = SimpleFeedForwardEstimator(
prediction_length=prediction_length,
freq=freq,
trainer=Trainer(ctx=general_instance_type, epochs=5) # optional
)
###Output
_____no_output_____
###Markdown
Launch the Experiment
###Code
my_experiment = GluonTSFramework(
sagemaker_session=sagemaker_session,
role=role,
image_name=docker_image,
base_job_name=base_job_description,
train_instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
)
###Output
_____no_output_____
###Markdown
And finally we call the *train* method to train our estimator, where we just specify our dataset and estimator:
###Code
results = my_experiment.train(dataset=dataset_name, estimator=my_estimator)
###Output
_____no_output_____
###Markdown
Review the Results The 'train(...)' function returns a TrainResult which consists of the following fields:
###Code
print(results._fields)
###Output
_____no_output_____
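###Markdown
One of these fields is a fitted *predictor*. As a minimal sketch (not part of the original tutorial flow, and assuming it is acceptable to also download "m4_hourly" locally), we could already generate forecasts for the test series with it:
###Code
# illustrative only: `results.predictor` is a regular GluonTS Predictor,
# so it can be applied to a locally downloaded copy of the dataset
from gluonts.dataset.repository.datasets import get_dataset
local_dataset = get_dataset(dataset_name)  # downloads m4_hourly locally
forecasts = list(results.predictor.predict(local_dataset.test))
print(forecasts[0].mean[:5])  # first few predicted values of the first test series
###Output
_____no_output_____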
###Markdown
So we could use the predictor straight away to predict on some additional data if we would like. We can also inspect our training history and monitored metrics (like resource consumption or epoch loss) on SageMaker under "Training/Training jobs" here:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{results.job_name}")
###Output
_____no_output_____
###Markdown
Or take a look at the metrics right here:
###Code
results.metrics[0]
###Output
_____no_output_____
###Markdown
Or head to our bucket to download the model artifacts:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{results.job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Run a custom python script The process to run a custom Python script is not much different; however, you will have to adapt your usual Python script to the particularities of SageMaker.
###Code
import os
import gluonts
import s3fs
###Output
_____no_output_____
###Markdown
Writing a custom script Your custom script has to adhere to a rough format, for this reason we provide the "run_entry_point.py" script with GluonTS under:
###Code
run_entry_point_path = (
Path(os.path.dirname(gluonts.__file__))
/ "nursery"
/ "sagemaker_sdk"
/ "entry_point_scripts"
/ "run_entry_point.py"
)
###Output
_____no_output_____
###Markdown
Lets take a look:
###Code
with open(run_entry_point_path, 'r') as script:
print(script.read())
###Output
_____no_output_____
###Markdown
As we can see, there is a *run* method, within which we are supposed to write our custom code. Additionally, at the bottom we might need to parse additional arguments that we provide for example through the "inputs" parameter of the GluonTSFramework.run(...) method. The "inputs" parameter cannot be empty, due to the restrictions of the Framework baseclass of the GluonTSFramework, however, you can pass an empty file located on S3 as dummy input. Let's define a path for the dummy file:
###Code
dummy_s3_file_path = bucket_name + "/dummy_1234"
print(f"dummy_s3_file_path = '{dummy_s3_file_path}'")
###Output
_____no_output_____
###Markdown
Lets create the S3 file (if the file already exists you will have to set overwrite to 'True', or choose a different path for the dummy file):
###Code
overwrite = False
s3 = s3fs.S3FileSystem(anon=False) # uses default credentials
if not(s3.exists(dummy_s3_file_path)) or overwrite:
with s3.open(dummy_s3_file_path, 'w') as f:
f.write("This is a dummy file.")
print("Dummy file created!")
else:
print("No dummy file created!")
my_inputs = {'my_dataset_name': sagemaker.s3_input(dummy_s3_file_path, content_type='application/json')}
###Output
_____no_output_____
###Markdown
If we were to pass a dataset location as input as defined above, we would have to parse the location of that dataset (which will be uploaded into the container environment) for example like this: > parser.add_argument('--my_fancy_dataset', type=str, default=os.environ['SM_CHANNEL_MY_DATASET_NAME']) Prepending "SM_CHANNEL_" and converting the name to all caps is necessary. Within the *run(...)* method the location will be accessible by: > arguments.my_fancy_dataset Any additional "hyperparameters" you provide to *GluonTSFramework.run(...)* are already parsed by: >parser.add_argument("--sm-hps", type=json.loads, default=os.environ["SM_HPS"]) Get familiar tasks: For now, we will only use the unmodified run script, however, a good exercise to get familiar with the framework would be to modify the script so:* You parse the location of the input we provide through "my_inputs" * You read the dummy file inside the run(...) method* You write the content of the file to a new file called "parsed.txt" and save it to the output location * You check in S3 that "parsed.txt" was saved to S3 in your experiment folder under /output/output.tar.gz HINT: you don't need to write or read from S3 explicitly, but rather access the appropriate local location through "arguments" of the run(...) method within your scripts; let SageMaker containers handle the interaction with S3. HINT: you can take a look at the "train_entry_point.py" to see an actual example for a training script. Run the Experiment As we will see, the arguments to the GluonTSFramework run(...) method are almost identical to the train(...) one, however, we additionally specify the required "entry_point" and "inputs", and optionally "wait=False" because we might want to launch multiple jobs asynchronously.
###Code
my_experiment, my_job_name = GluonTSFramework.run(
entry_point=str(run_entry_point_path), # additionally required
inputs = my_inputs, # additionally required
sagemaker_session=sagemaker_session,
role=role,
image_name=docker_image,
base_job_name=base_job_description,
train_instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
wait=False # optional
)
###Output
_____no_output_____
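###Markdown
For reference, a stripped-down custom entry point that parses the 'my_dataset_name' channel and the SageMaker hyperparameters in the way described above could look like the following sketch. It is purely illustrative (the job above uses the unmodified "run_entry_point.py" printed earlier), and the argument names simply mirror the channel key we chose:
###Code
# Hypothetical entry-point sketch, meant to be saved as a standalone .py script.
import argparse
import json
import os
def run(arguments):
    # inside the container, SageMaker mounts the 'my_dataset_name' channel at this local path
    if arguments.my_dataset_name is not None:
        print("channel contents:", os.listdir(arguments.my_dataset_name))
    print("hyperparameters:", arguments.sm_hps)
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--my_dataset_name", type=str, default=os.environ.get("SM_CHANNEL_MY_DATASET_NAME"))
    parser.add_argument("--sm-hps", type=json.loads, default=os.environ.get("SM_HPS", "{}"))
    arguments, _ = parser.parse_known_args()
    run(arguments)
###Output
_____no_output_____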
###Markdown
We can take a look at the training job right away:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{my_job_name}")
###Output
_____no_output_____
###Markdown
And again, check out the corresponding S3 location:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{my_job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Custom GluonTS version: In case you are modifying GluonTS on your local machine and want to run experiments on your custom version, just import GluonTS and define: >gluont_ts_path = Path(gluonts.__path__[0]) >gluont_ts_requirements_path = gluont_ts_path.parent.parent / "requirements" / "requirements.txt" and change the dependencies argument of run(...) or train(...) the following way: > dependencies=[gluont_ts_requirements_path, gluont_ts_path] Cleanup Lets just clean up the temporary directory:
###Code
temp_dir.cleanup()
###Output
_____no_output_____
###Markdown
GluonTS SageMaker SDK Tutorial ***This notebook is meant to be uploaded to a SageMaker notebook instance and executed there. As a kernel choose `conda_mxnet_p36`*** ***In this how-to tutorial we will train a SimpleFeedForwardEstimator on the m4_hourly dataset on AWS SageMaker using the GluonTSFramework, and later review its performance. At the very end you will see how to launch your custom training script.*** ***In the end you should know how to train any GluonEstimator on any Dataset on SageMaker using the GluonTSFramework train(...) method, and how to run your own script using the run(...) method.*** Notebook Setup Currently, *GluonTSFramework* is only available through the master branch of *GluonTS*, so we install it with the required dependencies first:
###Code
!pip install --upgrade mxnet==1.6 git+https://github.com/awslabs/gluon-ts.git#egg=gluonts[dev]
# Third-party requirements
import boto3
import sagemaker
from pathlib import Path
import tempfile
# First-party requirements
from gluonts.nursery.sagemaker_sdk.estimator import GluonTSFramework
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.mx.trainer import Trainer
###Output
_____no_output_____
###Markdown
Credentials & Configuration Since we are executing this tutorial on a SageMaker notebook instance, many parameters that we would usually need to predefine manually can simply be retrieved from the environment. In order to highlight how you would have to set these parameters when you are executing a notebook like this on your local machine, take a look at the cell output:
###Code
temp_session = boto3.session.Session()
temp_sagemaker_session = sagemaker.session.Session(boto_session=temp_session)
bucket_name = f"s3://{temp_sagemaker_session.default_bucket()}"
print(f"bucket_name = '{bucket_name}'")
region_name = temp_session.region_name
print(f"region_name = '{region_name}'")
profile_name = temp_session.profile_name
print(f"profile_name = '{profile_name}'")
iam_role = sagemaker.get_execution_role()
print(f"iam_role = '{iam_role}'")
###Output
_____no_output_____
###Markdown
Remember that in order to be able to use the profile 'default' (or any other profile) on your local machine you must have correctly set up your [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html). Additionally, the specified bucket needs to be actually present in the specified region. With this out of the way, we can continue as if we had set the above variables manually. Experimental Setup Experiment directory First, we should define the *S3 parent folder location* which will later contain the folder with all the data generated during the experiment (model artifacts, custom scripts, dependencies etc.). If you choose to use a subfolder for your experiments (like we do here) the folder does not have to exist yet, but its name must satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*. If not specified, the default bucket of the specified region itself will be used.
###Code
experiment_parent_dir = bucket_name + "/my-sagemaker-experiments"
print(f"experiment_parent_dir = '{experiment_parent_dir}'")
###Output
_____no_output_____
###Markdown
SageMaker session Next, we need to create a sagemaker session in our region using a [*boto3*](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#using-boto-3) session with our credentials (profile).
###Code
boto_session = boto3.session.Session(profile_name=profile_name, region_name=region_name)
sagemaker_session = sagemaker.session.Session(boto_session=boto_session)
###Output
_____no_output_____
###Markdown
AWS IAM role We also need to provide an AWS [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) role, with which to access the resources on our account.
###Code
role = iam_role
###Output
_____no_output_____
###Markdown
Training image & instance type We can just use one of the prebuilt SageMaker [ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html) images and install the gluonts version we prefer dynamically through the 'requirements.txt'.
###Code
general_instance_type = "cpu"
# general_instance_type = "gpu" # alternative
###Output
_____no_output_____
###Markdown
Depending on our *general_instance_type* choice, we will have to select an appropriate concrete 'instance type':
###Code
instance_type = "ml.c5.xlarge" if general_instance_type == "cpu" else "ml.p2.xlarge"
###Output
_____no_output_____
###Markdown
and an appropriate prebuilt mxnet image (we will take the training images here):
###Code
if general_instance_type == "cpu":
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-cpu-py36-ubuntu16.04"
else:
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-gpu-py36-cu101-ubuntu16.04"
print(f"docker_image = '{docker_image}'")
###Output
_____no_output_____
###Markdown
Base job description We can give our training job a base name that lets us easily identify experiments of the same type. It has to satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*
###Code
base_job_description = "my-sagemaker-experiment-intro"
###Output
_____no_output_____
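###Markdown
As an optional sanity check, the chosen name can be validated against the pattern quoted above (a small illustrative sketch):
###Code
import re
# pattern from the markdown above: ^[a-zA-Z0-9](-*[a-zA-Z0-9])*
assert re.fullmatch(r"[a-zA-Z0-9](-*[a-zA-Z0-9])*", base_job_description) is not None
###Output
_____no_output_____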
###Markdown
Dataset Here we have two choices; we can either pick a built in dataset provided by GluonTS or any dataset in the GluonTS dataset format located on S3, which would look like this: >dataset_name> |---> train> | |--> data.json> |---> test> | |--> data.json> |---> metadata.json Since we haven't uploaded any, lets pick a provided one for now. The following datasets are available:
###Code
print(dataset_recipes.keys())
###Output
_____no_output_____
###Markdown
How about "m4_hourly"?:
###Code
dataset_name = "m4_hourly"
# dataset_name = "s3://<your-custom-dataset-location>" # if using a custom dataset
###Output
_____no_output_____
###Markdown
We will need to know the *prediction_length* and *freq* of the dataset to define our SimpleFeedForwardEstimator, so lets keep track of them:
###Code
freq = dataset_recipes[dataset_name].keywords["pandas_freq"]
prediction_length = dataset_recipes[dataset_name].keywords["prediction_length"]
###Output
_____no_output_____
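###Markdown
The same two values can also be read from the dataset metadata once the dataset has been downloaded locally; the following optional cross-check (it triggers a local download of "m4_hourly") uses the `get_dataset` helper imported above:
###Code
# optional cross-check of freq and prediction_length against the local dataset metadata
local_dataset = get_dataset(dataset_name)
print(local_dataset.metadata.freq, local_dataset.metadata.prediction_length)
###Output
_____no_output_____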
###Markdown
Requirements and Dependencies We will additionally have to specify a 'requirements.txt' file where we specify which GluonTS version we want to use. Here we will create a temporary requirements file, but you can just have a 'requirements.txt' file in the folder where you launch your experiments.
###Code
requirements_dot_txt_file_name = "requirements.txt"
requirements_dot_txt_file_content = "git+https://github.com/awslabs/gluon-ts.git"
# only using temporary directory for demonstration
temp_dir = tempfile.TemporaryDirectory()
temp_dir_path = Path(temp_dir.name)
# create the requirements.txt file
with open(temp_dir_path / requirements_dot_txt_file_name, "w") as req_file: # has to be called requirements.txt
req_file.write(requirements_dot_txt_file_content)
my_requirements_txt_file_path = str(temp_dir_path / requirements_dot_txt_file_name)
print(f"my_requirements_txt_file_path = '{my_requirements_txt_file_path}'")
###Output
_____no_output_____
###Markdown
Define the Estimator Now we define the Estimator we want to train, which can be any GluonEstimator (except ) with any hyperparameter.
###Code
my_estimator = SimpleFeedForwardEstimator(
prediction_length=prediction_length,
freq=freq,
trainer=Trainer(ctx=general_instance_type, epochs=5) # optional
)
###Output
_____no_output_____
###Markdown
Launch the Experiment
###Code
my_experiment = GluonTSFramework(
sagemaker_session=sagemaker_session,
role=role,
image_uri=docker_image,
base_job_name=base_job_description,
instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
)
###Output
_____no_output_____
###Markdown
And finally we call the *train* method to train our estimator, where we just specify our dataset and estimator:
###Code
results = my_experiment.train(dataset=dataset_name, estimator=my_estimator)
###Output
_____no_output_____
###Markdown
Review the Results The 'train(...)' function returns a TrainResult which consists of the following fields:
###Code
print(results._fields)
###Output
_____no_output_____
###Markdown
So we could use the predictor straight away to predict on some additional data if we would like. We can also inspect our training history and monitored metrics (like resource consumption or epoch loss) on SageMaker under "Training/Training jobs" here:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{results.job_name}")
###Output
_____no_output_____
###Markdown
Or take a look at the metrics right here:
###Code
results.metrics[0]
###Output
_____no_output_____
###Markdown
Or head to our bucket to download the model artifacts:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{results.job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Run a custom python script The process to run a custom Python script is not much different; however, you will have to adapt your usual Python script to the particularities of SageMaker.
###Code
import os
import gluonts
import s3fs
###Output
_____no_output_____
###Markdown
Writing a custom script Your custom script has to adhere to a rough format, for this reason we provide the "run_entry_point.py" script with GluonTS under:
###Code
run_entry_point_path = (
Path(os.path.dirname(gluonts.__file__))
/ "nursery"
/ "sagemaker_sdk"
/ "entry_point_scripts"
/ "run_entry_point.py"
)
###Output
_____no_output_____
###Markdown
Lets take a look:
###Code
with open(run_entry_point_path, 'r') as script:
print(script.read())
###Output
_____no_output_____
###Markdown
As we can see, there is a *run* method, within which we are supposed to write our custom code. Additionally, at the bottom we might need to parse additional arguments that we provide for example through the "inputs" parameter of the GluonTSFramework.run(...) method. The "inputs" parameter cannot be empty, due to the restrictions of the Framework baseclass of the GluonTSFramework, however, you can pass an empty file located on S3 as dummy input. Let's define a path for the dummy file:
###Code
dummy_s3_file_path = bucket_name + "/dummy_1234"
print(f"dummy_s3_file_path = '{dummy_s3_file_path}'")
###Output
_____no_output_____
###Markdown
Lets create the S3 file (if the file already exists you will have to set overwrite to 'True', or choose a different path for the dummy file):
###Code
overwrite = False
s3 = s3fs.S3FileSystem(anon=False) # uses default credentials
if not(s3.exists(dummy_s3_file_path)) or overwrite:
with s3.open(dummy_s3_file_path, 'w') as f:
f.write("This is a dummy file.")
print("Dummy file created!")
else:
print("No dummy file created!")
my_inputs = {'my_dataset_name': sagemaker.TrainingInput(dummy_s3_file_path, content_type='application/json')}
###Output
_____no_output_____
###Markdown
If we were to pass a dataset location as input as defined above, we would have to parse the location of that dataset (which will be uploaded into the container environment) for example like this: > parser.add_argument('--my_fancy_dataset', type=str, default=os.environ['SM_CHANNEL_MY_DATASET_NAME']) Prepending "SM_CHANNEL_" and converting the name to all caps is necessary. Within the *run(...)* method the location will be accessible by: > arguments.my_fancy_dataset Any additional "hyperparameters" you provide to *GluonTSFramework.run(...)* are already parsed by: >parser.add_argument("--sm-hps", type=json.loads, default=os.environ["SM_HPS"]) Get familiar tasks: For now, we will only use the unmodified run script, however, a good exercise to get familiar with the framework would be to modify the script so:* You parse the location of the input we provide through "my_inputs" * You read the dummy file inside the run(...) method* You write the content of the file to a new file called "parsed.txt" and save it to the output location * You check in S3 that "parsed.txt" was saved to S3 in your experiment folder under /output/output.tar.gz HINT: you don't need to write or read from S3 explicitly, but rather access the appropriate local location through "arguments" of the run(...) method within your scripts; let SageMaker containers handle the interaction with S3. HINT: you can take a look at the "train_entry_point.py" to see an actual example for a training script. Run the Experiment As we will see, the arguments to the GluonTSFramework run(...) method are almost identical to the train(...) one, however, we additionally specify the required "entry_point" and "inputs", and optionally "wait=False" because we might want to launch multiple jobs asynchronously.
###Code
my_experiment, my_job_name = GluonTSFramework.run(
entry_point=str(run_entry_point_path), # additionally required
inputs = my_inputs, # additionally required
sagemaker_session=sagemaker_session,
role=role,
image_uri=docker_image,
base_job_name=base_job_description,
instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
wait=False # optional
)
###Output
_____no_output_____
###Markdown
We can take a look at the training job right away:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{my_job_name}")
###Output
_____no_output_____
###Markdown
And again, check out the corresponding S3 location:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{my_job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Custom GluonTS version: In case you are modifying GluonTS on your local machine and want to run experiments on your custom version, just import GluonTS and define: >gluont_ts_path = Path(gluonts.__path__[0]) >gluont_ts_requirements_path = gluont_ts_path.parent.parent / "requirements" / "requirements.txt" and change the dependencies argument of run(...) or train(...) the following way: > dependencies=[gluont_ts_requirements_path, gluont_ts_path] Cleanup Lets just clean up the temporary directory:
###Code
temp_dir.cleanup()
###Output
_____no_output_____
###Markdown
GluonTS SageMaker SDK Tutorial ***This notebook is meant to be uploaded to a SageMaker notebook instance and executed there. As a kernel choose `conda_mxnet_p36`*** ***In this how-to tutorial we will train a SimpleFeedForwardEstimator on the m4_hourly dataset on AWS SageMaker using the GluonTSFramework, and later review its performance. At the very end you will see how to launch your custom training script.*** ***In the end you should know how to train any GluonEstimator on any Dataset on SageMaker using the GluonTSFramework train(...) method, and how to run your own script using the run(...) method.*** Notebook Setup Currently, *GluonTSFramework* is only available through the master branch of *GluonTS*, so we install it with the required dependencies first:
###Code
!pip install --upgrade mxnet==1.6 git+https://github.com/awslabs/gluon-ts.git#egg=gluonts[dev]
# Third-party requirements
import boto3
import sagemaker
from pathlib import Path
import tempfile
# First-party requirements
from gluonts.nursery.sagemaker_sdk.estimator import GluonTSFramework
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.trainer import Trainer
###Output
_____no_output_____
###Markdown
Credentials & Configuration Since we are executing this tutorial on a SageMaker notebook instance, many parameters that we would usually need to predefine manually can simply be retrieved from the environment. In order to highlight how you would have to set these parameters when you are executing a notebook like this on your local machine, take a look at the cell output:
###Code
temp_session = boto3.session.Session()
temp_sagemaker_session = sagemaker.session.Session(boto_session=temp_session)
bucket_name = f"s3://{temp_sagemaker_session.default_bucket()}"
print(f"bucket_name = '{bucket_name}'")
region_name = temp_session.region_name
print(f"region_name = '{region_name}'")
profile_name = temp_session.profile_name
print(f"profile_name = '{profile_name}'")
iam_role = sagemaker.get_execution_role()
print(f"iam_role = '{iam_role}'")
###Output
_____no_output_____
###Markdown
Remember that in order to be able to use the profile 'default' (or any other profile) on your local machine you must have correctly set up your [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html). Additionally, the specified bucket needs to be actually present in the specified region. With this out of the way, we can continue as if we had set the above variables manually. Experimental Setup Experiment directory First, we should define the *S3 parent folder location* which will later contain the folder with all the data generated during the experiment (model artifacts, custom scripts, dependencies etc.). If you choose to use a subfolder for your experiments (like we do here) the folder does not have to exist yet, but its name must satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*. If not specified, the default bucket of the specified region itself will be used.
###Code
experiment_parent_dir = bucket_name + "/my-sagemaker-experiments"
print(f"experiment_parent_dir = '{experiment_parent_dir}'")
###Output
_____no_output_____
###Markdown
SageMaker session Next, we need to create a sagemaker session in our region using a [*boto3*](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#using-boto-3) session with our credentials (profile).
###Code
boto_session = boto3.session.Session(profile_name=profile_name, region_name=region_name)
sagemaker_session = sagemaker.session.Session(boto_session=boto_session)
###Output
_____no_output_____
###Markdown
AWS IAM role We also need to provide an AWS [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) role, with which to access the resources on our account.
###Code
role = iam_role
###Output
_____no_output_____
###Markdown
Training image & instance type We can just use one of the prebuilt SageMaker [ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-basics.html) images and install the gluonts version we prefer dynamically through the 'requirements.txt'.
###Code
general_instance_type = "cpu"
# general_instance_type = "gpu" # alternative
###Output
_____no_output_____
###Markdown
Depending on our *general_instance_type* choice, we will have to select an appropriate concrete 'instance type':
###Code
instance_type = "ml.c5.xlarge" if general_instance_type == "cpu" else "ml.p2.xlarge"
###Output
_____no_output_____
###Markdown
and an appropriate prebuilt mxnet image (we will take the training images here):
###Code
if general_instance_type == "cpu":
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-cpu-py36-ubuntu16.04"
else:
docker_image = f"763104351884.dkr.ecr.{region_name}.amazonaws.com/mxnet-training:1.6.0-gpu-py36-cu101-ubuntu16.04"
print(f"docker_image = '{docker_image}'")
###Output
_____no_output_____
###Markdown
Base job description We can give our training job a base name that lets us easily identify experiments of the same type. It has to satisfy the regular expression pattern: \^\[a-zA-Z0-9\](-\*\[a-zA-Z0-9\])\*
###Code
base_job_description = "my-sagemaker-experiment-intro"
###Output
_____no_output_____
###Markdown
Dataset Here we have two choices; we can either pick a built in dataset provided by GluonTS or any dataset in the GluonTS dataset format located on S3, which would look like this: >dataset_name> |---> train> | |--> data.json> |---> test> | |--> data.json> |---> metadata.json Since we haven't uploaded any, lets pick a provided one for now. The following datasets are available:
###Code
print(dataset_recipes.keys())
###Output
_____no_output_____
###Markdown
How about "m4_hourly"?:
###Code
dataset_name = "m4_hourly"
# dataset_name = "s3://<your-custom-dataset-location>" # if using a custom dataset
###Output
_____no_output_____
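###Markdown
If you later do want to use your own data, a rough sketch of uploading a dataset that is already stored locally in the GluonTS layout shown above could look like this (the local folder name is hypothetical, and s3fs with default credentials is assumed):
###Code
# hypothetical upload sketch: copies train/data.json, test/data.json and metadata.json to S3
import s3fs
s3 = s3fs.S3FileSystem(anon=False)  # uses default credentials
s3.put("my-local-dataset-folder", bucket_name[5:] + "/my-custom-dataset", recursive=True)
###Output
_____no_output_____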
###Markdown
We will need to know the *prediction_length* and *freq* of the dataset to define our SimpleFeedForwardEstimator, so lets keep track of them:
###Code
freq = dataset_recipes[dataset_name].keywords["pandas_freq"]
prediction_length = dataset_recipes[dataset_name].keywords["prediction_length"]
###Output
_____no_output_____
###Markdown
Requirements and Dependencies We will additionally have to specify a 'requirements.txt' file where we specify which GluonTS version we want to use. Here we will create a temporary requirements file, but you can just have a 'requirements.txt' file in the folder where you launch your experiments.
###Code
requirements_dot_txt_file_name = "requirements.txt"
requirements_dot_txt_file_content = "git+https://github.com/awslabs/gluon-ts.git"
# only using temporary directory for demonstration
temp_dir = tempfile.TemporaryDirectory()
temp_dir_path = Path(temp_dir.name)
# create the requirements.txt file
with open(temp_dir_path / requirements_dot_txt_file_name, "w") as req_file: # has to be called requirements.txt
req_file.write(requirements_dot_txt_file_content)
my_requirements_txt_file_path = str(temp_dir_path / requirements_dot_txt_file_name)
print(f"my_requirements_txt_file_path = '{my_requirements_txt_file_path}'")
###Output
_____no_output_____
###Markdown
Define the Estimator Now we define the Estimator we want to train, which can be any GluonEstimator (except ) with any hyperparameter.
###Code
my_estimator = SimpleFeedForwardEstimator(
prediction_length=prediction_length,
freq=freq,
trainer=Trainer(ctx=general_instance_type, epochs=5) # optional
)
###Output
_____no_output_____
###Markdown
Launch the Experiment
###Code
my_experiment = GluonTSFramework(
sagemaker_session=sagemaker_session,
role=role,
image_name=docker_image,
base_job_name=base_job_description,
train_instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
)
###Output
_____no_output_____
###Markdown
And finally we call the *train* method to train our estimator, where we just specify our dataset and estimator:
###Code
results = my_experiment.train(dataset=dataset_name, estimator=my_estimator)
###Output
_____no_output_____
###Markdown
Review the Results The 'train(...)' function returns a TrainResult which consists of the following fields:
###Code
print(results._fields)
###Output
_____no_output_____
###Markdown
So we could use the predictor straight away to predict on some additional data if we would like. We can also inspect our training history and monitored metrics (like resource consumption or epoch loss) on SageMaker under "Training/Training jobs" here:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{results.job_name}")
###Output
_____no_output_____
###Markdown
Or take a look at the metrics right here:
###Code
results.metrics[0]
###Output
_____no_output_____
###Markdown
Or head to our bucket to download the model artifacts:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{results.job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Run a custom python script The process to run a custom Python script is not much different; however, you will have to adapt your usual Python script to the particularities of SageMaker.
###Code
import os
import gluonts
import s3fs
###Output
_____no_output_____
###Markdown
Writing a custom script Your custom script has to adhere to a rough format, for this reason we provide the "run_entry_point.py" script with GluonTS under:
###Code
run_entry_point_path = (
Path(os.path.dirname(gluonts.__file__))
/ "nursery"
/ "sagemaker_sdk"
/ "entry_point_scripts"
/ "run_entry_point.py"
)
###Output
_____no_output_____
###Markdown
Lets take a look:
###Code
with open(run_entry_point_path, 'r') as script:
print(script.read())
###Output
_____no_output_____
###Markdown
As we can see, there is a *run* method, within which we are supposed to write our custom code. Additionally, at the bottom we might need to parse additional arguments that we provide for example through the "inputs" parameter of the GluonTSFramework.run(...) method. The "inputs" parameter cannot be empty, due to the restrictions of the Framework baseclass of the GluonTSFramework, however, you can pass an empty file located on S3 as dummy input. Let's define a path for the dummy file:
###Code
dummy_s3_file_path = bucket_name + "/dummy_1234"
print(f"dummy_s3_file_path = '{dummy_s3_file_path}'")
###Output
_____no_output_____
###Markdown
Lets create the S3 file (if the file already exists you will have to set overwrite to 'True', or choose a different path for the dummy file):
###Code
overwrite = False
s3 = s3fs.S3FileSystem(anon=False) # uses default credentials
if not(s3.exists(dummy_s3_file_path)) or overwrite:
with s3.open(dummy_s3_file_path, 'w') as f:
f.write("This is a dummy file.")
print("Dummy file created!")
else:
print("No dummy file created!")
my_inputs = {'my_dataset_name': sagemaker.s3_input(dummy_s3_file_path, content_type='application/json')}
###Output
_____no_output_____
###Markdown
If we were to pass a dataset location as input as defined above, we would have to parse the location of that dataset (which will be uploaded into the container environment) for example like this: > parser.add_argument('--my_fancy_dataset', type=str, default=os.environ['SM_CHANNEL_MY_DATASET_NAME']) Prepending "SM_CHANNEL_" and converting the name to all caps is necessary. Within the *run(...)* method the location will be accessible by: > arguments.my_fancy_dataset Any additional "hyperparameters" you provide to *GluonTSFramework.run(...)* are already parsed by: >parser.add_argument("--sm-hps", type=json.loads, default=os.environ["SM_HPS"]) Get familiar tasks: For now, we will only use the unmodified run script, however, a good exercise to get familiar with the framework would be to modify the script so:* You parse the location of the input we provide through "my_inputs" * You read the dummy file inside the run(...) method* You write the content of the file to a new file called "parsed.txt" and save it to the output location * You check in S3 that "parsed.txt" was saved to S3 in your experiment folder under /output/output.tar.gz HINT: you don't need to write or read from S3 explicitly, but rather access the appropriate local location through "arguments" of the run(...) method within your scripts; let SageMaker containers handle the interaction with S3. HINT: you can take a look at the "train_entry_point.py" to see an actual example for a training script. Run the Experiment As we will see, the arguments to the GluonTSFramework run(...) method are almost identical to the train(...) one, however, we additionally specify the required "entry_point" and "inputs", and optionally "wait=False" because we might want to launch multiple jobs asynchronously.
###Code
my_experiment, my_job_name = GluonTSFramework.run(
entry_point=str(run_entry_point_path), # additionally required
inputs = my_inputs, # additionally required
sagemaker_session=sagemaker_session,
role=role,
image_name=docker_image,
base_job_name=base_job_description,
train_instance_type=instance_type,
dependencies=[my_requirements_txt_file_path],
output_path=experiment_parent_dir, # optional, but recommended
code_location=experiment_parent_dir, # optional, but recommended
wait=False # optional
)
###Output
_____no_output_____
###Markdown
We can take a look at the training job right away:
###Code
print(f"https://{region_name}.console.aws.amazon.com/sagemaker/home?region={region_name}#/jobs/{my_job_name}")
###Output
_____no_output_____
###Markdown
And again, check out the corresponding S3 location:
###Code
print(f"https://s3.console.aws.amazon.com/s3/buckets/{experiment_parent_dir[5:]}/{my_job_name}/?region={region_name}&tab=overview")
###Output
_____no_output_____
###Markdown
Custom GluonTS version: In case you are modifying GluonTS on your local machine and want to run experiments on your custom version, just import GluonTS and define: >gluont_ts_path = Path(gluonts.__path__[0]) >gluont_ts_requirements_path = gluont_ts_path.parent.parent / "requirements" / "requirements.txt" and change the dependencies argument of run(...) or train(...) the following way: > dependencies=[gluont_ts_requirements_path, gluont_ts_path] Cleanup Lets just clean up the temporary directory:
###Code
temp_dir.cleanup()
###Output
_____no_output_____ |
Baylor-Libraries-Sentiment-Per-Line.ipynb | ###Markdown
**Best viewed via Jupyter nbviewer**: https://nbviewer.jupyter.org/github/Josh-Been/Sentiment-Per-Line/blob/master/Baylor-Libraries-Sentiment-Per-Line.ipynb?flush_cache=true Sentiment-Per-LineSentiment values per line in text file. Results downloaded to structured csv table.Baylor University Libraries: Implementation of VaderSentimentVADER (Valence Aware Dictionary and sEntiment Reasoner)This Python application was created by the Baylor University Libraries to assist researchers to apply sentiment to text files using the VaderSentiment library. Baylor University has no connection with the creator of the VaderSentiment library. This is merely a browsable form to access the library.For documentation of the VaderSentiment library, navigate to https://github.com/cjhutto/vaderSentiment.This application will create a comma delimited spreadsheet in the same directory as the selected settings file. The spreadsheet will repeat each line in the first column and write the assigned composite sentiment in the second column. **First**, ensure Anaconda 2.7 is installed on your system. If it is not, head to https://www.anaconda.com/download/ and install. Then continue with the next step. **Second**, launch Anaconda Navigator. After installing in the previous step, this will be in the Programs menu on Windows and in the Applications directory on Mac. **Third**, launch the Jupyter Notebook application. Anaconda Navigator has a link directly to Jupyter Notebook. **Fourth**, download the Jupyter Notebook file https://raw.githubusercontent.com/Josh-Been/Sentiment-Per-Line/master/Baylor-Libraries-Sentiment-Per-Line.ipynb to your computer. In the Jupyter browser tab that opened in the previous step, click the Upload button and browse for the saved Jupyter Notebook file.Up to this point you have been reading an HTML version of this Notebook.Now switch to the interactive version in Jupyter. **Fifth**, ensure you have the VaderSentiment library installed. If you are confident you already installed VaderSentiment, skip ahead of this step. Otherwise, put the cursor in the box below and click the 'run cell, select below' button at the top of this notebook.
###Code
!pip install vaderSentiment
###Output
_____no_output_____
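###Markdown
To quickly verify the installation, a short sanity check like the following can be run (this snippet is only illustrative and is not part of the application below):
###Code
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
# prints a dict with 'neg', 'neu', 'pos' and 'compound' scores
print(analyzer.polarity_scores("VADER is smart, handsome, and funny."))
###Output
_____no_output_____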
###Markdown
**Sixth**, run the Sentiment Per Line tool by placing the cursor in the box below and clicking the 'run cell, select below' button at the top of this notebook.After this runs, you should see the application appear. If you do not see it, check the minimized applications.
###Code
#################
#Sentiment values per line in text file. Results downloaded to structured csv table.
#Baylor University Libraries: Implementation of VaderSentiment
#
#This Python application was created by the Baylor University Libraries to assist researchers to apply sentiment to text files using the VaderSentiment library. Baylor University has no connection with the creator of the VaderSentiment library. This is merely a browsable form to access the library. For documentation of the VaderSentiment library, navigate to https://github.com/cjhutto/vaderSentiment.
#VADER (Valence Aware Dictionary and sEntiment Reasoner)
#This application will create a comma delimited spreadsheet in the same directory as the selected settings file. The spreadsheet will repeat each line in the first column and write the assigned composite sentiment in the second column.
#
#__Steps__
#(1) Ensure Python 2.7 is installed on your computer. The advised package is Anaconda Python, available here https://www.anaconda.com/download
#(2) Ensure that you have the proper Python libraries to run the application. This requires the standard Python libraries provided by the Anaconda2 distribution, as well as the PIL (https://anaconda.org/anaconda/pil) and VaderSentiment libraries (pip install vaderSentiment).\n')
#(3) Click the Browse for Text File to Run Vader Sentiment button.
#(4) This will create a comma delimited spreadsheet in the same directory as the selected settings file.
#
# Licensed under the MIT License
# Under production
#
# Developed using Python Anaconda2, 64 bit
# Dependencies not included in Anaconda2: VaderSentiment
# Additional dependencies: Tkinter, operator, string, webbrowser, os, subprocess
#################
import os, operator, string, webbrowser, os, subprocess
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from Tkinter import *
from tkFileDialog import askopenfilename
from tkFileDialog import asksaveasfile
def entry_form():
root = Tk()
root.title("Baylor University Libraries: VaderSentiment Line Analyzer")
# root.wm_iconbitmap('ico.ico')
txtHeader = Text(root, height=3, width=48)
txtHeader.pack()
txtHeader.insert(END, "VaderSentiment Line Analyzer\nDeveloped by Baylor University Libraries\nContact [email protected] for assistance\n")
separator = Frame(height=0, bd=1, relief=SUNKEN)
separator.pack(fill=X, padx=5, pady=5)
def help_file():
f=open('helpfile.txt', 'w')
f.write('Sentiment values per line in text file. Results downloaded to structured csv table.\n')
f.write('Baylor University Libraries: Implementation of VaderSentiment\n\n')
f.write('This Python application was created by the Baylor University Libraries to assist researchers to apply sentiment to text files using the VaderSentiment library. Baylor University has no connection with the creator of the VaderSentiment library. This is merely a browsable form to access the library. For documentation of the VaderSentiment library, navigate to https://github.com/cjhutto/vaderSentiment.\n')
f.write('VADER (Valence Aware Dictionary and sEntiment Reasoner)\n')
f.write('This application will create a comma delimited spreadsheet in the same directory as the selected settings file. The spreadsheet will repeat each line in the first column and write the assigned composite sentiment in the second column.\n\n')
f.write('__Steps__\n')
f.write('(1) Ensure Python 2.7 is installed on your computer. The advised package is Anaconda Python, available here https://www.anaconda.com/download\n')
f.write('(2) Ensure that you have the proper Python libraries to run the application. This requires the standard Python libraries provided by the Anaconda2 distribution, as well as the PIL (https://anaconda.org/anaconda/pil) and VaderSentiment libraries (pip install vaderSentiment).')
f.write('(3) Click the Browse for Text File to Run Vader Sentiment button.\n')
f.write('(4) This will create a comma delimited spreadsheet in the same directory as the selected settings file.\n')
f.close()
launch(f.name)
def launch(url):
if os.name == 'nt':
command=webbrowser.open(url,new=2)
elif 'http:' in url or 'https:' in url:
command=webbrowser.get().open(url,new=2)
else:
subprocess.call(['open', '-a', 'TextEdit', url])
def destroy():
root.update()
root.destroy()
def strip_non_ascii(string):
''' Returns the string without non ASCII characters'''
string = string.replace('\n','')
string = string.replace(',','')
stripped = (c for c in string if 0 < ord(c) < 127)
return ''.join(stripped)
def processVader(txt_file):
analyzer = SentimentIntensityAnalyzer()
lines = txt_file
i = 0
while i >= 0:
print i
if not os.path.isfile(lines.replace('.csv',str(i) + 'Vader.csv')):
out_file = lines.replace('.csv',str(i) + 'Vader.csv')
break
if not os.path.isfile(lines.replace('.txt',str(i) + 'Vader.csv')):
out_file = lines.replace('.txt',str(i) + 'Vader.csv')
break
i+=1
fout = open(out_file, 'w')
with open(lines) as f:
for line in f:
line = line.replace(',','')
vs = analyzer.polarity_scores(line)
fout.write(line.replace('\n','') + ',' + str(vs['compound']) + '\n')
fout.close()
def callback():
txt_file = askopenfilename(defaultextension='.txt', filetypes=(('text', '*.txt'),('comma separated', '*.csv')))
if txt_file != '':
try:
txtSelected.set('Processing')
processVader(txt_file)
txtSelected.set('Job Successful!')
root.update()
except:
txtSelected.set('! Problem Processing !')
Button(root, text='Browse for Text File (.txt) to Process Sentiment', fg='blue', command=callback).pack(fill=X)
txtSelected = StringVar()
Label(root, textvariable=txtSelected, fg='white', bg='black').pack()
txtSelected.set('Idle...')
# create a toplevel menu
menubar = Menu(root)
# display the menu
root.config(menu=menubar)
# create a pulldown menu, and add it to the menu bar
filemenu = Menu(menubar, tearoff=0)
filemenu.add_command(label="Digital Scholarship @ Baylor", command=lambda : launch('http://blogs.baylor.edu/digitalscholarship/'))
filemenu.add_command(label='Vader Sentiment', command=lambda : launch('https://github.com/cjhutto/vaderSentiment'))
menubar.add_cascade(label="Links", menu=filemenu)
helpmenu = Menu(menubar, tearoff=0)
helpmenu.add_command(label="Documentation", command=help_file)
helpmenu.add_command(label="Contact Author", command=lambda : launch('http://researchguides.baylor.edu/prf.php?account_id=144176'))
menubar.add_cascade(label="About", menu=helpmenu)
menubar.add_command(label="Quit", command=destroy)
# display the menu
root.config(menu=menubar)
mainloop()
def main():
entry_form()
if __name__ == "__main__":
main()
###Output
_____no_output_____ |
scheduler/notebooks/figures/evaluation/cluster_sweep.ipynb | ###Markdown
Makespan policies In this notebook, we compare various heterogeneity-agnostic and heterogeneity-aware makespan policies, with and without space sharing. Import statements
###Code
from plotting_utils import *
from utils import get_logfile_paths, average_jct_fn, makespan_fn, prune
###Output
_____no_output_____
###Markdown
Get list of relevant logfiles and define label mapping
###Code
static_logfile_paths = sorted(
get_logfile_paths(
"/future/u/deepakn/gavel/logs/cluster_sweep_static_jobs_final/", static_trace=True))
continuous_logfile_paths = sorted(
get_logfile_paths(
"/future/u/deepakn/gavel/logs/cluster_sweep_continuous_jobs_final/", static_trace=False))
###Output
_____no_output_____
###Markdown
Plotting functions
###Code
def plot_metric_vs_num_total_jobs(logfile_paths,
clusters,
seeds,
metric_fn,
metric_label,
xmax=None, ymax=None,
output_filename=None):
plt.figure(figsize=(8, 3))
ax = plt.subplot2grid((1, 1), (0, 0), colspan=1)
data = {"resource_fraction": [], "metric": [], "seed": []}
for (v100s, k80s) in clusters:
p100s = 0
metrics = {}
for policy in ["min_total_duration", "min_total_duration_perf"]:
relevant_logfile_paths = list(reversed(prune(
logfile_paths, v100s, p100s, k80s, policy)))
metrics[policy] = {
x[2]: metric_fn(x[1])
for x in relevant_logfile_paths}
metrics_to_plot = [metrics["min_total_duration"][seed] / metrics["min_total_duration_perf"][seed]
for seed in seeds]
import pandas as pd
data["resource_fraction"] += [float(v100s) / (float(v100s) + float(k80s))
for seed in seeds]
data["metric"] += metrics_to_plot
data["seed"] += seeds
import pandas as pd
df = pd.DataFrame(data)
print(df.groupby(["resource_fraction", "seed"]).mean())
sns.lineplot(x='resource_fraction', y='metric',
data=data, ci='sd',
markers=True, marker='o')
ax.set_xlabel("Fraction of V100s")
ax.set_ylabel(metric_label)
ax.set_xlim([0.05, 1.05])
ax.set_xticks([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
ax.set_ylim([0, ymax])
sns.despine()
leg = plt.legend(frameon=False)
bb = leg.get_bbox_to_anchor().inverse_transformed(ax.transAxes)
bb.y0 += 0.22
bb.y1 += 0.22
leg.set_bbox_to_anchor(bb, transform=ax.transAxes)
if output_filename is not None:
with PdfPages(output_filename) as pdf:
pdf.savefig(bbox_inches='tight')
plt.show()
def plot_metric_vs_inverse_lambda(logfile_paths,
clusters,
seeds,
metric_fn,
metric_label,
xmax=None,
ymax=None,
output_filename=None):
from utils import prune
plt.figure(figsize=(8, 3))
ax = plt.subplot2grid((1, 1), (0, 0), colspan=1)
data = {"input_job_rate": [], "metric": [], "seed": [],
"resource_fraction": []}
for (v100s, k80s) in clusters:
p100s = 0
metrics = {}
for policy in ["max_min_fairness", "max_min_fairness_perf"]:
relevant_logfile_paths = list(reversed(prune(
logfile_paths, v100s, p100s, k80s, policy)))
lambdas = [x[0] for x in relevant_logfile_paths]
input_job_rates = [3600.0 / x for x in lambdas]
metrics[policy] = {
(3600.0 / x[0], x[2]): metric_fn(x[1])
for x in relevant_logfile_paths}
keys = sorted(list(set(metrics[policy].keys())))
metrics_to_plot = []
for key in keys:
if metrics["max_min_fairness"][key] is None or \
metrics["max_min_fairness_perf"][key] is None:
print(key, metrics["max_min_fairness"][key], metrics["max_min_fairness_perf"][key])
metrics_to_plot.append(None)
else:
metrics_to_plot.append(
metrics["max_min_fairness"][key] /
metrics["max_min_fairness_perf"][key])
import pandas as pd
data["resource_fraction"] += [
"Frac. of V100s = %.1f" % (float(v100s) / (float(v100s) + float(k80s)))
for key in keys]
data["metric"] += metrics_to_plot
data["input_job_rate"] += [key[0] for key in keys]
data["seed"] += [key[1] for key in keys]
df = pd.DataFrame(data)
print(df.groupby(["input_job_rate", "resource_fraction", "seed"]).mean())
sns.lineplot(x='input_job_rate', y='metric', style='resource_fraction',
hue='resource_fraction',
data=data, ci='sd',
markers=True)
ax.set_xlabel("Input job rate (jobs/hr)")
ax.set_ylabel(metric_label)
ax.set_xlim([0, xmax])
ax.set_ylim([0, ymax])
sns.despine()
leg = plt.legend(loc='upper left', frameon=False)
bb = leg.get_bbox_to_anchor().inverse_transformed(ax.transAxes)
bb.y0 += 0.22
bb.y1 += 0.22
leg.set_bbox_to_anchor(bb, transform=ax.transAxes)
if output_filename is not None:
with PdfPages(output_filename) as pdf:
pdf.savefig(bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Plot makespan improvement vs. fraction of V100s
###Code
plot_metric_vs_num_total_jobs(
static_logfile_paths,
clusters=[(100, 0), (90, 10), (80, 20), (70, 30), (60, 40),
(50, 50), (40, 60), (30, 70), (20, 80), (10, 90)],
seeds=[0, 1, 2],
metric_fn=makespan_fn,
metric_label="Makespan reduction",
xmax=None,
ymax=None,
output_filename="cluster_sweep/makespan.pdf"
)
plot_metric_vs_inverse_lambda(
continuous_logfile_paths,
clusters=[(100, 0), (50, 50), (10, 90)],
seeds=[0, 1, 2],
metric_fn=average_jct_fn,
metric_label="Average JCT\nreduction",
xmax=None,
ymax=None,
output_filename="cluster_sweep/las.pdf")
###Output
(2.799999999377778, 0) None None
(2.799999999377778, 1) None None
(2.799999999377778, 2) None None
(3.2, 0) None None
(3.2, 1) None None
(3.2, 2) None None
(3.6, 0) None None
(3.6, 1) None None
(3.6, 2) None None
(4.0, 0) None None
(4.0, 1) None None
(4.0, 2) None None
(4.400000000977777, 0) None None
(4.400000000977777, 1) None None
(4.400000000977777, 2) None None
metric
input_job_rate resource_fraction seed
0.4 Frac. of V100s = 0.1 0 1.724436
1 1.550913
2 1.698038
Frac. of V100s = 0.5 0 1.482283
1 1.443859
... ...
4.4 Frac. of V100s = 0.5 1 3.216701
2 2.755792
Frac. of V100s = 1.0 0 1.000000
1 1.000000
2 1.000000
[99 rows x 1 columns]
|
lab/Lab 07 - Bagging Decision Trees.ipynb | ###Markdown
Lab 07 - Bagging Decision TreesIn the previous lab we discussed the [bias-variance tradeoff](https://github.com/GreenGilad/IML.HUJI/blob/master/lab/Lab%2007%20-%20Bias-Variance%20Trade-off.ipynb) and saw how:- The less complex a model is the higher is its bias and lower is its variance. We say in this case that the model is underfitted.- The more complex a model is the higher is its variance and lower is its bias. We say in this case that the model is overfitted.In this lab we will use the power of ensemble methods to fit a set of models, each with a low complexity, to achieve better performances while avoiding overfitting. We use the hypothesis class of decision trees and *bag* multiple trees into a single ensemble.
###Code
import sys
sys.path.append("../")
from utils import *
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.utils import resample
import matplotlib.pyplot as plt
import itertools
symbols = np.array(["circle", "x"])
###Output
_____no_output_____
###Markdown
???????????????????????????
###Code
d, n_train, n_test = 8, 3000, 500
X, y = create_data_bagging_utils(d=d, n_samples = n_train + n_test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=n_test, random_state=42)
go.Figure(data=go.Scatter(x=X[:,0], y=X[:,1], mode="markers", showlegend=False,
marker=dict(color=y, symbol=symbols[y], colorscale=[custom[0], custom[-1]])),
layout=go.Layout(title=rf"$\textbf{{(1) Tree Dataset - True Depth {d}}}$",
xaxis_title=r"$x_1$",
yaxis_title=r"$x_2$"))
###Output
_____no_output_____
###Markdown
Creation of bootstrap samplesNow, after we understand better the different datasets we're creating for the bootstrap algorithm, let's see the different models (trees) that are created from these datasets. We'll take 2 bootstrap datasets, fit a decision tree of depth 2 to each dataset and plot the trees
###Code
_, axes = plt.subplots(nrows = 1, ncols = 2, figsize = (10,2), dpi=300)
for i in range(2):
idx = resample(range(len(X_train)), replace = True, n_samples = len(X_train))
fit = DecisionTreeClassifier(max_depth=2).fit(X_train[idx], y_train[idx])
plot_tree(fit, filled = True, impurity=False, class_names=["O", "X"], ax = axes[i])
plt.show()
###Output
_____no_output_____
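###Markdown
Before aggregating trees, it is worth checking how a single depth-2 tree performs on its own; a quick check using the train/test split defined above:
###Code
# a single shallow tree fit on the full training set, for reference
single_tree = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
print("train accuracy:", single_tree.score(X_train, y_train))
print("test accuracy:", single_tree.score(X_test, y_test))
###Output
_____no_output_____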
###Markdown
Now, let's create 100 bootstrap datasets from our train data, each with the same number of samples as the train set. Each bootstrap dataset is built by choosing samples from the train data randomly with replacement, meaning it is built in the following way: 1. choose a sample randomly from the train data and add it to the bootstrap dataset 2. keep the sample in the train data so that it can be re-selected to the same bootstrap dataset (and of course to the other bootstrap datasets)
###Code
bootstrap_sets = [set(resample(range(len(X_train)), replace = True, n_samples = len(X_train))) for _ in range(100)]
overlap = [len(bootstrap_sets[i].intersection(bootstrap_sets[j])) / len(X_train)
for i, j in (itertools.combinations(range(len(bootstrap_sets)), 2))]
print(f"Average overlap between bootstrap samples of {round(100*np.mean(overlap), 2)}% " +
f"with variance of {round(np.var(overlap), 7)}")
###Output
Average overlap between bootstrap samples of 39.87% with variance of 3.92e-05
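###Markdown
This is very close to what the theory predicts: each bootstrap sample contains a fraction $1-(1-1/n)^n \approx 1-1/e \approx 0.632$ of the distinct training points, so two independent bootstrap samples are expected to share roughly $0.632^2 \approx 0.4$ of the training set. A quick check:
###Code
# theoretical distinct fraction per bootstrap sample and the implied expected pairwise overlap
n = len(X_train)
distinct_fraction = 1 - (1 - 1 / n) ** n
print(distinct_fraction, distinct_fraction ** 2)
###Output
_____no_output_____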
###Markdown
Construct trees
###Code
from multiprocessing import Pool
def fit_bootstrap_tree(depth, X, y):
idx = resample(range(len(X)), replace = True, n_samples = len(X))
return DecisionTreeClassifier(max_depth=depth).fit(X[idx], y[idx])
# fit 50 bagged depth-2 trees and collect their test-set predictions
trees = [fit_bootstrap_tree(2, X_train, y_train) for _ in range(50)]
preds = np.array([t.predict(X_test) for t in trees])
# test accuracy of the majority-vote ensemble as a function of the number of trees used
ensemble_sizes = np.arange(1, len(trees) + 1)
np.mean((np.cumsum(preds, axis=0) > ensemble_sizes[:, None] / 2) == y_test, axis=1)
num_of_trees = 400 # number of weak learners
iterations = np.arange(0, num_of_trees, 5)  # ensemble sizes at which we evaluate
depth = 3
def train_bootstrap():
    all_indexes = np.arange(len(X_train))
    train_errors = []
    test_errors = []
    train_var = []
    test_var = []
    trees = np.zeros(shape = num_of_trees, dtype=object)
    for t in range(num_of_trees):
        # resample a new bootstrap dataset (with replacement)
        indexes = resample(all_indexes, replace = True, n_samples = 1000)
        new_x_train, new_y_train = X_train[indexes], y_train[indexes]
        ensemble_learner = DecisionTreeClassifier(max_depth=depth)
        ensemble_learner.fit(new_x_train, new_y_train)
        trees[t] = ensemble_learner
    for T in iterations:
        # aggregate the first T weak learners (small trees) by majority vote
        train_pred = (np.sum([trees[t].predict(X_train) for t in range(T)], axis = 0) > T / 2).astype(int)
        train_errors.append(1 - np.mean(train_pred == y_train))
        train_var.append(train_pred.var())
        test_pred = (np.sum([trees[t].predict(X_test) for t in range(T)], axis = 0) > T / 2).astype(int)
        test_errors.append(1 - np.mean(test_pred == y_test))
        test_var.append(test_pred.var())
    return train_errors, test_errors, train_var, test_var, trees
train_errors, test_errors, train_var, test_var, trees = train_bootstrap()
# line - train error
train_error = train_errors[-1]
# lines - test errors
single_stump_test_error = test_errors[0]
deep_tree_test_error = 1 - np.mean(tree.DecisionTreeClassifier(max_depth=250).fit(iterations, tags_train).predict(samples_test) == tags_test)
# Form grid of points to use for plotting decision boundaries
lims = np.array([samples_train.min(axis=0), samples_train.max(axis=0)]).T + np.array([-.2, .2])
xx, yy = list(map(np.ravel, np.meshgrid(np.arange(*lims[0], .2), np.arange(*lims[1], .2))))
# Retrieve model train error at each iteration of fitting
staged_scores = test_errors
# Predict labels of grid points at each iteration of fitting
staged_predictions = np.array(list(model.staged_predict(np.vstack([xx, yy]).T)))
Create animation frames
frames = []
for i in range(num_of_trees):
frames.append(go.Frame(
data=[
# Scatter of sample weights
go.Scatter(x=samples_train[:,0], y= samples_train[:,1], mode='markers', showlegend=False, marker=dict(color=tags_train, colorscale=class_colors(2),
size=np.maximum(2, np.ones(8*5))),
xaxis="x", yaxis="y"),
# Staged decision surface
go.Scatter(x=xx, y=yy, marker=dict(symbol = "square", colorscale=custom, color=trees[i].predict(np.vstack([xx, yy]).T)),
mode='markers', opacity = 0.4, showlegend=False, xaxis="x2", yaxis="y2"),
# Scatter of train samples with true class
go.Scatter(x=samples_train[:,0], y=samples_train[:,1], mode='markers', showlegend=False, xaxis="x2", yaxis="y2",
marker=dict(color=tags_train, colorscale=class_colors(2), symbol=class_symbols[tags_train])),
# Scatter of staged score
go.Scatter(x=list(range(i)), y=test_errors[:i], mode='lines+markers', showlegend=False, marker_color="black",
xaxis="x3", yaxis="y3")
],
layout = go.Layout(title = rf"hh"),
traces=[0, 1, 2, 3]))
fig = make_subplots(rows=2, cols=2, row_heights=[350, 200],
subplot_titles=(r"$\hh",
r"$\hh"),
specs=[[{}, {}], [{"colspan": 2}, None]])\
.add_traces(data=frames[0].data, rows=[1,1,1,2], cols=[1,2,2,1])\
.update(frames = frames)\
.update_layout(title=frames[0].layout.title,
updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(), AnimationButtons.pause()])],
width=600, height=550, margin=dict(t=100))\
.update_yaxes(range=[min(staged_scores)-.1, 1.1], autorange=False, row=2, col=1)\
.update_xaxes(range=[0, len(frames)], autorange=False, row=2, col=1)
fig.show()
###Output
_____no_output_____
###Markdown
Lab 07 - Bagging Decision TreesIn the previous lab we discussed the [bias-variance tradeoff](https://github.com/GreenGilad/IML.HUJI/blob/master/lab/Lab%2007%20-%20Bias-Variance%20Trade-off.ipynb) and saw how:- The less complex a model is, the higher its bias and the lower its variance. In this case we say the model is underfitted.- The more complex a model is, the higher its variance and the lower its bias. In this case we say the model is overfitted.In this lab we will use the power of ensemble methods to fit a set of models, each of low complexity, and achieve better performance while avoiding overfitting. We use the hypothesis class of decision trees and *bag* multiple trees into a single ensemble.
###Code
import sys
sys.path.append("../")
from utils import *
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.utils import resample
import matplotlib.pyplot as plt
import itertools
symbols = np.array(["circle", "x"])
###Output
_____no_output_____
###Markdown
Generating the dataWe begin by generating a synthetic dataset whose labels are determined by a decision tree of depth 8, splitting it into train and test sets, and visualizing the samples.
###Code
d, n_train, n_test = 8, 3000, 500
X, y = create_data_bagging_utils(d=d, n_samples = n_train + n_test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=n_test, random_state=42)
go.Figure(data=go.Scatter(x=X[:,0], y=X[:,1], mode="markers", showlegend=False,
marker=dict(color=y, symbol=symbols[y], colorscale=[custom[0], custom[-1]])),
layout=go.Layout(title=rf"$\textbf{{(1) Tree Dataset - True Depth {d}}}$",
xaxis_title=r"$x_1$",
yaxis_title=r"$x_2$"))
###Output
_____no_output_____
###Markdown
Creation of bootstrap samplesNow that we understand the different datasets created for the bootstrap algorithm, let's look at the different models (trees) that are fitted to these datasets. We'll take 2 bootstrap datasets, fit a decision tree of depth 2 to each dataset and plot the resulting trees.
###Code
_, axes = plt.subplots(nrows = 1, ncols = 2, figsize = (10,2), dpi=300)
for i in range(2):
idx = resample(range(len(X_train)), replace = True, n_samples = len(X_train))
fit = DecisionTreeClassifier(max_depth=2).fit(X_train[idx], y_train[idx])
plot_tree(fit, filled = True, impurity=False, class_names=["O", "X"], ax = axes[i])
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's create 100 bootstrap datasets from our train data, each with as many samples as the training set itself.Each bootstrap dataset is built by choosing samples from the train data randomly with replacement, that is, it is built as follows:1. choose a sample randomly from the train data and add it to the bootstrap dataset2. keep the sample in the train data so that it can be re-selected into the same bootstrap dataset (and of course into the other bootstrap datasets)
###Code
bootstrap_sets = [set(resample(range(len(X_train)), replace = True, n_samples = len(X_train))) for _ in range(100)]
overlap = [len(bootstrap_sets[i].intersection(bootstrap_sets[j])) / len(X_train)
for i, j in (itertools.combinations(range(len(bootstrap_sets)), 2))]
print(f"Average overlap between bootstrap samples of {round(100*np.mean(overlap), 2)}% " +
f"with variance of {round(np.var(overlap), 7)}")
###Output
Average overlap between bootstrap samples of 40.01% with variance of 4.85e-05
###Markdown
Construct trees
###Code
from multiprocessing import Pool
def fit_bootstrap_tree(depth, X, y):
idx = resample(range(len(X)), replace = True, n_samples = len(X))
return DecisionTreeClassifier(max_depth=depth).fit(X[idx], y[idx])
trees = [fit_bootstrap_tree(2, X_train, y_train) for _ in range(50)]
preds = np.array([t.predict(X_test) for t in trees])
# Majority-vote test accuracy of the first k trees, for k = 1, ..., 50:
# a sample is classified as 1 if more than half of the first k trees predict 1
majority = np.cumsum(preds, axis=0) > (np.arange(1, len(trees) + 1) / 2)[:, None]
np.mean(majority.astype(int) == y_test, axis=1)
# num_of_trees = 400 # number of weak learners
# iterations = np.arange(0, num_of_trees, 5)
# depth = 3
# def train_bootstrap():
# all_indexes = np.arange(len(samples_train))
# train_errors = []
# test_errors = []
# train_var = []
# test_var = []
# trees = np.zeros(shape = num_of_trees, dtype=object)
# for t in range(num_of_trees):
# # resample new dataset(with replacement)
# indexes = resample(all_indexes, replace = True, n_samples = 1000)
# new_x_train, new_y_train = samples_train[indexes], tags_train[indexes]
# ensemble_learner = tree.DecisionTreeClassifier(max_depth=depth)
# ensemble_learner.fit(new_x_train, new_y_train)
# trees[t] = ensemble_learner
# for T in iterations:
# # predicting with weak leaners (small trees)
# train_pred = np.sign(np.sum([trees[t].predict(samples_train) for t in range(T)], axis = 0))
# train_errors.append (1 - np.mean(train_pred == tags_train))
# train_var.append(train_pred.var())
# test_pred = np.sign(np.sum([trees[t].predict(samples_test) for t in range(T)], axis = 0))
# test_errors.append (1 - np.mean(test_pred == tags_test))
# test_var.append(test_pred.var())
# return train_errors, test_errors, train_var, test_var, trees
# train_errors, test_errors, train_var, test_var, trees = train_bootstrap()
# # line - train error
# train_error = train_errors[-1]
# # lines - test errors
# single_stump_test_error = test_errors[0]
# deep_tree_test_error = 1 - np.mean(tree.DecisionTreeClassifier(max_depth=250).fit(samples_train, tags_train).predict(samples_test) == tags_test)
# # Form grid of points to use for plotting decision boundaries
# lims = np.array([samples_train.min(axis=0), samples_train.max(axis=0)]).T + np.array([-.2, .2])
# xx, yy = list(map(np.ravel, np.meshgrid(np.arange(*lims[0], .2), np.arange(*lims[1], .2))))
# # # Retrieve model train error at each iteration of fitting
# # staged_scores = test_errors
# # # Predict labels of grid points at each iteration of fitting
# # staged_predictions = np.array(list(model.staged_predict(np.vstack([xx, yy]).T)))
# # Create animation frames
# # frames = []
# # for i in range(num_of_trees):
# # frames.append(go.Frame(
# # data=[
# # # Scatter of sample weights
# # go.Scatter(x=samples_train[:,0], y= samples_train[:,1], mode='markers', showlegend=False, marker=dict(color=tags_train, colorscale=class_colors(2),
# # size=np.maximum(2, np.ones(8*5))),
# # xaxis="x", yaxis="y"),
# # # Staged decision surface
# # go.Scatter(x=xx, y=yy, marker=dict(symbol = "square", colorscale=custom, color=trees[i].predict(np.vstack([xx, yy]).T)),
# # mode='markers', opacity = 0.4, showlegend=False, xaxis="x2", yaxis="y2"),
# # # Scatter of train samples with true class
# # go.Scatter(x=samples_train[:,0], y=samples_train[:,1], mode='markers', showlegend=False, xaxis="x2", yaxis="y2",
# # marker=dict(color=tags_train, colorscale=class_colors(2), symbol=class_symbols[tags_train])),
# # # Scatter of staged score
# # go.Scatter(x=list(range(i)), y=test_errors[:i], mode='lines+markers', showlegend=False, marker_color="black",
# # xaxis="x3", yaxis="y3")
# # ],
# # layout = go.Layout(title = rf"hh"),
# # traces=[0, 1, 2, 3]))
# # fig = make_subplots(rows=2, cols=2, row_heights=[350, 200],
# # subplot_titles=(r"$\hh",
# # r"$\hh"),
# # specs=[[{}, {}], [{"colspan": 2}, None]])\
# # .add_traces(data=frames[0].data, rows=[1,1,1,2], cols=[1,2,2,1])\
# # .update(frames = frames)\
# # .update_layout(title=frames[0].layout.title,
# # updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(), AnimationButtons.pause()])],
# # width=600, height=550, margin=dict(t=100))\
# # .update_yaxes(range=[min(staged_scores)-.1, 1.1], autorange=False, row=2, col=1)\
# # .update_xaxes(range=[0, len(frames)], autorange=False, row=2, col=1)
# # fig.show()
###Output
_____no_output_____
###Markdown
Lab 07 - Bagging Decision TreesIn the previous lab we discussed the [bias-variance tradeoff](https://github.com/GreenGilad/IML.HUJI/blob/master/lab/Lab%2007%20-%20Bias-Variance%20Trade-off.ipynb) and saw how:- The less complex a model is, the higher its bias and the lower its variance. In this case we say the model is underfitted.- The more complex a model is, the higher its variance and the lower its bias. In this case we say the model is overfitted.In this lab we will use the power of ensemble methods to fit a set of models, each of low complexity, and achieve better performance while avoiding overfitting. We use the hypothesis class of decision trees and *bag* multiple trees into a single ensemble.
###Code
import sys
sys.path.append("../")
from utils import *
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.utils import resample
import matplotlib.pyplot as plt
import itertools
symbols = np.array(["circle", "x"])
###Output
_____no_output_____
###Markdown
Generating the dataWe begin by generating a synthetic dataset whose labels are determined by a decision tree of depth 8, splitting it into train and test sets, and visualizing the samples.
###Code
d, n_train, n_test = 8, 3000, 500
X, y = create_data_bagging_utils(d=d, n_samples = n_train + n_test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=n_test, random_state=42)
go.Figure(data=go.Scatter(x=X[:,0], y=X[:,1], mode="markers", showlegend=False,
marker=dict(color=y, symbol=symbols[y], colorscale=[custom[0], custom[-1]])),
layout=go.Layout(title=rf"$\textbf{{(1) Tree Dataset - True Depth {d}}}$",
xaxis_title=r"$x_1$",
yaxis_title=r"$x_2$"))
###Output
_____no_output_____
###Markdown
Creation of bootstrap samplesNow that we understand the different datasets created for the bootstrap algorithm, let's look at the different models (trees) that are fitted to these datasets. We'll take 2 bootstrap datasets, fit a decision tree of depth 2 to each dataset and plot the resulting trees.
###Code
_, axes = plt.subplots(nrows = 1, ncols = 2, figsize = (10,2), dpi=300)
for i in range(2):
idx = resample(range(len(X_train)), replace = True, n_samples = len(X_train))
fit = DecisionTreeClassifier(max_depth=2).fit(X_train[idx], y_train[idx])
plot_tree(fit, filled = True, impurity=False, class_names=["O", "X"], ax = axes[i])
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's create 100 bootstrap datasets from our train data, each with as many samples as the training set itself.Each bootstrap dataset is built by choosing samples from the train data randomly with replacement, that is, it is built as follows:1. choose a sample randomly from the train data and add it to the bootstrap dataset2. keep the sample in the train data so that it can be re-selected into the same bootstrap dataset (and of course into the other bootstrap datasets)
###Code
bootstrap_sets = [set(resample(range(len(X_train)), replace = True, n_samples = len(X_train))) for _ in range(100)]
overlap = [len(bootstrap_sets[i].intersection(bootstrap_sets[j])) / len(X_train)
for i, j in (itertools.combinations(range(len(bootstrap_sets)), 2))]
print(f"Average overlap between bootstrap samples of {round(100*np.mean(overlap), 2)}% " +
f"with variance of {round(np.var(overlap), 7)}")
###Output
_____no_output_____
###Markdown
Construct trees
###Code
from multiprocessing import Pool
def fit_bootstrap_tree(depth, X, y):
idx = resample(range(len(X)), replace = True, n_samples = len(X))
return DecisionTreeClassifier(max_depth=depth).fit(X[idx], y[idx])
trees = [fit_bootstrap_tree(2, X_train, y_train) for _ in range(50)]
preds = np.array([t.predict(X_test) for t in trees])
# Majority-vote test accuracy of the first k trees, for k = 1, ..., 50:
# a sample is classified as 1 if more than half of the first k trees predict 1
majority = np.cumsum(preds, axis=0) > (np.arange(1, len(trees) + 1) / 2)[:, None]
np.mean(majority.astype(int) == y_test, axis=1)
# num_of_trees = 400 # number of weak learners
# iterations = np.arange(0, num_of_trees, 5)
# depth = 3
# def train_bootstrap():
# all_indexes = np.arange(len(samples_train))
# train_errors = []
# test_errors = []
# train_var = []
# test_var = []
# trees = np.zeros(shape = num_of_trees, dtype=object)
# for t in range(num_of_trees):
# # resample new dataset(with replacement)
# indexes = resample(all_indexes, replace = True, n_samples = 1000)
# new_x_train, new_y_train = samples_train[indexes], tags_train[indexes]
# ensemble_learner = tree.DecisionTreeClassifier(max_depth=depth)
# ensemble_learner.fit(new_x_train, new_y_train)
# trees[t] = ensemble_learner
# for T in iterations:
# # predicting with weak leaners (small trees)
# train_pred = np.sign(np.sum([trees[t].predict(samples_train) for t in range(T)], axis = 0))
# train_errors.append (1 - np.mean(train_pred == tags_train))
# train_var.append(train_pred.var())
# test_pred = np.sign(np.sum([trees[t].predict(samples_test) for t in range(T)], axis = 0))
# test_errors.append (1 - np.mean(test_pred == tags_test))
# test_var.append(test_pred.var())
# return train_errors, test_errors, train_var, test_var, trees
# train_errors, test_errors, train_var, test_var, trees = train_bootstrap()
# # line - train error
# train_error = train_errors[-1]
# # lines - test errors
# single_stump_test_error = test_errors[0]
# deep_tree_test_error = 1 - np.mean(tree.DecisionTreeClassifier(max_depth=250).fit(samples_train, tags_train).predict(samples_test) == tags_test)
# # Form grid of points to use for plotting decision boundaries
# lims = np.array([samples_train.min(axis=0), samples_train.max(axis=0)]).T + np.array([-.2, .2])
# xx, yy = list(map(np.ravel, np.meshgrid(np.arange(*lims[0], .2), np.arange(*lims[1], .2))))
# # # Retrieve model train error at each iteration of fitting
# # staged_scores = test_errors
# # # Predict labels of grid points at each iteration of fitting
# # staged_predictions = np.array(list(model.staged_predict(np.vstack([xx, yy]).T)))
# # Create animation frames
# # frames = []
# # for i in range(num_of_trees):
# # frames.append(go.Frame(
# # data=[
# # # Scatter of sample weights
# # go.Scatter(x=samples_train[:,0], y= samples_train[:,1], mode='markers', showlegend=False, marker=dict(color=tags_train, colorscale=class_colors(2),
# # size=np.maximum(2, np.ones(8*5))),
# # xaxis="x", yaxis="y"),
# # # Staged decision surface
# # go.Scatter(x=xx, y=yy, marker=dict(symbol = "square", colorscale=custom, color=trees[i].predict(np.vstack([xx, yy]).T)),
# # mode='markers', opacity = 0.4, showlegend=False, xaxis="x2", yaxis="y2"),
# # # Scatter of train samples with true class
# # go.Scatter(x=samples_train[:,0], y=samples_train[:,1], mode='markers', showlegend=False, xaxis="x2", yaxis="y2",
# # marker=dict(color=tags_train, colorscale=class_colors(2), symbol=class_symbols[tags_train])),
# # # Scatter of staged score
# # go.Scatter(x=list(range(i)), y=test_errors[:i], mode='lines+markers', showlegend=False, marker_color="black",
# # xaxis="x3", yaxis="y3")
# # ],
# # layout = go.Layout(title = rf"hh"),
# # traces=[0, 1, 2, 3]))
# # fig = make_subplots(rows=2, cols=2, row_heights=[350, 200],
# # subplot_titles=(r"$\hh",
# # r"$\hh"),
# # specs=[[{}, {}], [{"colspan": 2}, None]])\
# # .add_traces(data=frames[0].data, rows=[1,1,1,2], cols=[1,2,2,1])\
# # .update(frames = frames)\
# # .update_layout(title=frames[0].layout.title,
# # updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(), AnimationButtons.pause()])],
# # width=600, height=550, margin=dict(t=100))\
# # .update_yaxes(range=[min(staged_scores)-.1, 1.1], autorange=False, row=2, col=1)\
# # .update_xaxes(range=[0, len(frames)], autorange=False, row=2, col=1)
# # fig.show()
###Output
_____no_output_____
###Markdown
Lab 07 - Bagging Decision TreesIn the previous lab we discussed the [bias-variance tradeoff](https://github.com/GreenGilad/IML.HUJI/blob/master/lab/Lab%2007%20-%20Bias-Variance%20Trade-off.ipynb) and saw how:- The less complex a model is, the higher its bias and the lower its variance. In this case we say the model is underfitted.- The more complex a model is, the higher its variance and the lower its bias. In this case we say the model is overfitted.In this lab we will use the power of ensemble methods to fit a set of models, each of low complexity, and achieve better performance while avoiding overfitting. We use the hypothesis class of decision trees and *bag* multiple trees into a single ensemble.
###Code
import sys
sys.path.append("../")
from utils import *
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.utils import resample
import matplotlib.pyplot as plt
import itertools
symbols = np.array(["circle", "x"])
###Output
_____no_output_____
###Markdown
Generating the dataWe begin by generating a synthetic dataset whose labels are determined by a decision tree of depth 8, splitting it into train and test sets, and visualizing the samples.
###Code
d, n_train, n_test = 8, 3000, 500
X, y = create_data_bagging_utils(d=d, n_samples = n_train + n_test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=n_test, random_state=42)
go.Figure(data=go.Scatter(x=X[:,0], y=X[:,1], mode="markers", showlegend=False,
marker=dict(color=y, symbol=symbols[y], colorscale=[custom[0], custom[-1]])),
layout=go.Layout(title=rf"$\textbf{{(1) Tree Dataset - True Depth {d}}}$",
xaxis_title=r"$x_1$",
yaxis_title=r"$x_2$"))
###Output
_____no_output_____
###Markdown
Creation of bootstrap samplesNow that we understand the different datasets created for the bootstrap algorithm, let's look at the different models (trees) that are fitted to these datasets. We'll take 2 bootstrap datasets, fit a decision tree of depth 2 to each dataset and plot the resulting trees.
###Code
_, axes = plt.subplots(nrows = 1, ncols = 2, figsize = (10,2), dpi=300)
for i in range(2):
idx = resample(range(len(X_train)), replace = True, n_samples = len(X_train))
fit = DecisionTreeClassifier(max_depth=2).fit(X_train[idx], y_train[idx])
plot_tree(fit, filled = True, impurity=False, class_names=["O", "X"], ax = axes[i])
plt.show()
###Output
_____no_output_____
###Markdown
Now, let's create 100 bootstrap datasets from our train data, each with as many samples as the training set itself.Each bootstrap dataset is built by choosing samples from the train data randomly with replacement, that is, it is built as follows:1. choose a sample randomly from the train data and add it to the bootstrap dataset2. keep the sample in the train data so that it can be re-selected into the same bootstrap dataset (and of course into the other bootstrap datasets)
###Code
bootstrap_sets = [set(resample(range(len(X_train)), replace = True, n_samples = len(X_train))) for _ in range(100)]
overlap = [len(bootstrap_sets[i].intersection(bootstrap_sets[j])) / len(X_train)
for i, j in (itertools.combinations(range(len(bootstrap_sets)), 2))]
print(f"Average overlap between bootstrap samples of {round(100*np.mean(overlap), 2)}% " +
f"with variance of {round(np.var(overlap), 7)}")
###Output
Average overlap between bootstrap samples of 39.76% with variance of 4.55e-05
###Markdown
Construct trees
###Code
from multiprocessing import Pool
def fit_bootstrap_tree(depth, X, y):
idx = resample(range(len(X)), replace = True, n_samples = len(X))
return DecisionTreeClassifier(max_depth=depth).fit(X[idx], y[idx])
trees = [fit_bootstrap_tree(2, X_train, y_train) for _ in range(50)]
preds = np.array([t.predict(X_test) for t in trees])
# Majority-vote test accuracy of the first k trees, for k = 1, ..., 50:
# a sample is classified as 1 if more than half of the first k trees predict 1
majority = np.cumsum(preds, axis=0) > (np.arange(1, len(trees) + 1) / 2)[:, None]
np.mean(majority.astype(int) == y_test, axis=1)
# num_of_trees = 400 # number of weak learners
# iterations = np.arange(0, num_of_trees, 5)
# depth = 3
# def train_bootstrap():
# all_indexes = np.arange(len(samples_train))
# train_errors = []
# test_errors = []
# train_var = []
# test_var = []
# trees = np.zeros(shape = num_of_trees, dtype=object)
# for t in range(num_of_trees):
# # resample new dataset(with replacement)
# indexes = resample(all_indexes, replace = True, n_samples = 1000)
# new_x_train, new_y_train = samples_train[indexes], tags_train[indexes]
# ensemble_learner = tree.DecisionTreeClassifier(max_depth=depth)
# ensemble_learner.fit(new_x_train, new_y_train)
# trees[t] = ensemble_learner
# for T in iterations:
# # predicting with weak leaners (small trees)
# train_pred = np.sign(np.sum([trees[t].predict(samples_train) for t in range(T)], axis = 0))
# train_errors.append (1 - np.mean(train_pred == tags_train))
# train_var.append(train_pred.var())
# test_pred = np.sign(np.sum([trees[t].predict(samples_test) for t in range(T)], axis = 0))
# test_errors.append (1 - np.mean(test_pred == tags_test))
# test_var.append(test_pred.var())
# return train_errors, test_errors, train_var, test_var, trees
# train_errors, test_errors, train_var, test_var, trees = train_bootstrap()
# # line - train error
# train_error = train_errors[-1]
# # lines - test errors
# single_stump_test_error = test_errors[0]
# deep_tree_test_error = 1 - np.mean(tree.DecisionTreeClassifier(max_depth=250).fit(samples_train, tags_train).predict(samples_test) == tags_test)
# # Form grid of points to use for plotting decision boundaries
# lims = np.array([samples_train.min(axis=0), samples_train.max(axis=0)]).T + np.array([-.2, .2])
# xx, yy = list(map(np.ravel, np.meshgrid(np.arange(*lims[0], .2), np.arange(*lims[1], .2))))
# # # Retrieve model train error at each iteration of fitting
# # staged_scores = test_errors
# # # Predict labels of grid points at each iteration of fitting
# # staged_predictions = np.array(list(model.staged_predict(np.vstack([xx, yy]).T)))
# # Create animation frames
# # frames = []
# # for i in range(num_of_trees):
# # frames.append(go.Frame(
# # data=[
# # # Scatter of sample weights
# # go.Scatter(x=samples_train[:,0], y= samples_train[:,1], mode='markers', showlegend=False, marker=dict(color=tags_train, colorscale=class_colors(2),
# # size=np.maximum(2, np.ones(8*5))),
# # xaxis="x", yaxis="y"),
# # # Staged decision surface
# # go.Scatter(x=xx, y=yy, marker=dict(symbol = "square", colorscale=custom, color=trees[i].predict(np.vstack([xx, yy]).T)),
# # mode='markers', opacity = 0.4, showlegend=False, xaxis="x2", yaxis="y2"),
# # # Scatter of train samples with true class
# # go.Scatter(x=samples_train[:,0], y=samples_train[:,1], mode='markers', showlegend=False, xaxis="x2", yaxis="y2",
# # marker=dict(color=tags_train, colorscale=class_colors(2), symbol=class_symbols[tags_train])),
# # # Scatter of staged score
# # go.Scatter(x=list(range(i)), y=test_errors[:i], mode='lines+markers', showlegend=False, marker_color="black",
# # xaxis="x3", yaxis="y3")
# # ],
# # layout = go.Layout(title = rf"hh"),
# # traces=[0, 1, 2, 3]))
# # fig = make_subplots(rows=2, cols=2, row_heights=[350, 200],
# # subplot_titles=(r"$\hh",
# # r"$\hh"),
# # specs=[[{}, {}], [{"colspan": 2}, None]])\
# # .add_traces(data=frames[0].data, rows=[1,1,1,2], cols=[1,2,2,1])\
# # .update(frames = frames)\
# # .update_layout(title=frames[0].layout.title,
# # updatemenus = [dict(type="buttons", buttons=[AnimationButtons.play(), AnimationButtons.pause()])],
# # width=600, height=550, margin=dict(t=100))\
# # .update_yaxes(range=[min(staged_scores)-.1, 1.1], autorange=False, row=2, col=1)\
# # .update_xaxes(range=[0, len(frames)], autorange=False, row=2, col=1)
# # fig.show()
###Output
_____no_output_____ |
examples/NonCOG-NASA-CMR.ipynb | ###Markdown
HDF assets from NASA-CMR
###Code
from eolearn.stac import STACInputTask
from eolearn.core import SaveTask, OverwritePermission, LinearWorkflow, EOExecutor
from sentinelhub import BBox, CRS
from datetime import date
import logging
import os
OUTPUT_DIR = "mod11a2-patches"
CACHE_DIR = "mod11a2-cache"
os.makedirs(OUTPUT_DIR, exist_ok=True)
add_data = STACInputTask(
catalog_url="http://localhost:3000/dev/stac/LPDAAC_ECS",
collection_name="MOD11A2.v006",
assets={"data"},
subdataset="HDF4_EOS:EOS_GRID:%s:MODIS_Grid_8Day_1km_LST:LST_Day_1km",
)
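# Note (added): `subdataset` is presumably a GDAL HDF4 subdataset template in
# which %s is replaced by the local path of each downloaded MOD11A2 granule,
# selecting the 8-day daytime land surface temperature band (LST_Day_1km)
# from inside the HDF container.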
save = SaveTask(
OUTPUT_DIR,
overwrite_permission=OverwritePermission.OVERWRITE_PATCH,
)
workflow = LinearWorkflow(
add_data,
save
)
bboxes = [
BBox([-64.3097860799999808,-31.5249839339999767, -64.0573813509999468,-31.3085281599999803], CRS('4326'))
]
time_interval = (date(2020, 10, 1), date(2020, 10, 31))
###Output
_____no_output_____
###Markdown
Debug
###Code
# logging.basicConfig(level=logging.DEBUG)
# add_data.execute(bbox=bboxes[0], time_interval=time_interval)
###Output
_____no_output_____
###Markdown
Workflow
###Code
execution_args = []
for i, bbox in enumerate(bboxes):
execution_args.append(
{
add_data: {"bbox": bbox, "time_interval": time_interval},
save: {"eopatch_folder": str(i)},
}
)
executor = EOExecutor(workflow, execution_args, save_logs=True)
executor.run(workers=5, multiprocess=True)
executor.make_report()
failed_ids = executor.get_failed_executions()
if failed_ids:
raise RuntimeError(
f"Execution failed EOPatches with IDs:\n{failed_ids}\n"
f"For more info check report at {executor.get_report_filename()}"
)
###Output
_____no_output_____
###Markdown
Visualize patches
###Code
from eolearn.core import EOPatch
import matplotlib.pyplot as plt
patch = EOPatch.load(os.path.join(OUTPUT_DIR, "0"))
patch
img = patch.data['BANDS'][0, ..., 0]
valid_img = img[img > 0]
valid_img.min(), valid_img.max()
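# The values above are raw digital numbers. MOD11A2 LST is typically stored
# with a 0.02 scale factor, so (assuming that convention) the range in
# Kelvin and degrees Celsius is roughly:
print(valid_img.min() * 0.02, "-", valid_img.max() * 0.02, "K")
print(valid_img.min() * 0.02 - 273.15, "-", valid_img.max() * 0.02 - 273.15, "C")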
plt.imshow(img, vmin=15175, vmax=15579, cmap='inferno')
###Output
_____no_output_____ |
tutorials/crowdsourcing/Crowdsourced_Sentiment_Analysis-pandas.ipynb | ###Markdown
Training a Sentiment Analysis LSTM Using Noisy Crowd Labels This is a version of the [crowdsourcing tutorial that uses PySpark](https://github.com/HazyResearch/snorkel/blob/master/tutorials/crowdsourcing/Crowdsourced_Sentiment_Analysis.ipynb) using Pandas instead of SparkSQL.In this tutorial, we'll provide a simple walkthrough of how to use Snorkel to resolve conflicts in a noisy crowdsourced dataset for a sentiment analysis task, and then use these denoised labels to train an LSTM sentiment analysis model which can be applied to new, unseen data to automatically make predictions!1. Creating basic Snorkel objects: `Candidates`, `Contexts`, and `Labels`2. Training the `GenerativeModel` to resolve labeling conflicts3. Training a simple LSTM sentiment analysis model, which can then be used on new, unseen data!Note that this is a simple tutorial meant to give an overview of the mechanics of using Snorkel-- we'll note places where more careful fine-tuning could be done! Task Detail: Weather Sentiments in TweetsIn this tutorial we focus on the [Weather sentiment](https://www.crowdflower.com/data/weather-sentiment/) task from [Crowdflower](https://www.crowdflower.com/).In this task, contributors were asked to grade the sentiment of a particular tweet relating to the weather. Contributors could choose among the following categories:1. Positive2. Negative3. I can't tell4. Neutral / author is just sharing information5. Tweet not related to weather conditionThe catch is that 20 contributors graded each tweet. Thus, in many cases contributors assigned conflicting sentiment labels to the same tweet. The task comes with two data files (to be found in the `data` directory of the tutorial):1. [weather-non-agg-DFE.csv](data/weather-non-agg-DFE.csv) contains the raw contributor answers for each of the 1,000 tweets.2. [weather-evaluated-agg-DFE.csv](data/weather-evaluated-agg-DFE.csv) contains gold sentiment labels by trusted workers for each of the 1,000 tweets.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Step 1: Preprocessing - Data Loading We load the raw data for our crowdsourcing task (stored in a local csv file) into a dataframe
###Code
import pandas as pd
# Load Raw Crowdsourcing Data
raw_crowd_answers = pd.read_csv("data/weather-non-agg-DFE.csv")
# Load Groundtruth Crowdsourcing Data
gold_crowd_answers = pd.read_csv("data/weather-evaluated-agg-DFE.csv")
# Filter out low-confidence answers
gold_answers = gold_crowd_answers[['tweet_id', 'sentiment', 'tweet_body']][(gold_crowd_answers.correct_category == 'Yes') & (gold_crowd_answers.correct_category_conf == 1)]
# Keep Only the Tweets with Available Groundtruth
candidate_labeled_tweets = raw_crowd_answers.join(gold_answers.set_index('tweet_id',drop=False),on=['tweet_id'],lsuffix='.raw',rsuffix='.gold',how='inner')
candidate_labeled_tweets = candidate_labeled_tweets[['tweet_id.raw','tweet_body.raw','worker_id','emotion']]
candidate_labeled_tweets.columns = ['tweet_id','tweet_body','worker_id','emotion']
###Output
_____no_output_____
###Markdown
As mentioned above, contributors can provide conflicting labels for the same tweet:
###Code
candidate_labeled_tweets.sort_values(['worker_id','tweet_id']).head()
###Output
_____no_output_____
###Markdown
Step 2: Generating Snorkel Objects `Candidates``Candidates` are the core objects in Snorkel representing objects to be classified. We'll use a helper function to create a custom `Candidate` sub-class, `Tweet`, with values representing the possible labels that it can be classified with:
###Code
from snorkel.models import candidate_subclass
values = list(candidate_labeled_tweets.emotion.unique())
Tweet = candidate_subclass('Tweet', ['tweet'], values=values)
###Output
_____no_output_____
###Markdown
`Contexts`All `Candidate` objects point to one or more `Context` objects, which represent the raw data that they are rooted in. In this case, our candidates will each point to a single `Context` object representing the raw text of the tweet.Once we have defined the `Context` for each `Candidate`, we can commit them to the database. Note that we also split into two sets while doing this:1. **Training set (`split=0`):** The tweets for which we have noisy, conflicting crowd labels; we will resolve these conflicts using the `GenerativeModel` and then use them as training data for the LSTM2. **Test set (`split=1`):** We will pretend that we do not have any crowd labels for this split of the data, and use these to test the LSTM's performance on unseen data
###Code
from snorkel.models import Context, Candidate
from snorkel.contrib.models.text import RawText
# Make sure DB is cleared
session.query(Context).delete()
session.query(Candidate).delete()
# Now we create the candidates with a simple loop
tweet_bodies = candidate_labeled_tweets \
[["tweet_id", "tweet_body"]] \
.sort_values("tweet_id") \
.drop_duplicates()
# Generate and store the tweet candidates to be classified
# Note: We split the tweets in two sets: one for which the crowd
# labels are not available to Snorkel (test, 10%) and one for which we assume
# crowd labels are obtained (to be used for training, 90%)
total_tweets = len(tweet_bodies)
tweet_list = []
test_split = total_tweets*0.1
for i, t in tweet_bodies.iterrows():
split = 1 if i <= test_split else 0
raw_text = RawText(stable_id=t.tweet_id, name=t.tweet_id, text=t.tweet_body)
tweet = Tweet(tweet=raw_text, split=split)
tweet_list.append(tweet)
session.add(tweet)
session.commit()
###Output
_____no_output_____
###Markdown
`Labels`Next, we'll store the labels for each of the training candidates in a sparse matrix (which will also automatically be saved to the Snorkel database), with one row for each candidate and one column for each crowd worker:
###Code
from snorkel.annotations import LabelAnnotator
from collections import defaultdict
# Extract worker votes
# Cache locally to speed up for this small set
worker_labels = candidate_labeled_tweets[["tweet_id", "worker_id", "emotion"]]
wls = defaultdict(list)
for i, row in worker_labels.iterrows():
wls[str(row.tweet_id)].append((str(row.worker_id), row.emotion))
# Create a label generator
def worker_label_generator(t):
"""A generator over the different (worker_id, label_id) pairs for a Tweet."""
for worker_id, label in wls[t.tweet.name]:
yield worker_id, label
labeler = LabelAnnotator(label_generator=worker_label_generator)
%time L_train = labeler.apply(split=0)
L_train
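# Quick inspection (added): L_train is a sparse matrix with one row per training
# tweet and one column per crowd worker; most entries are empty since each
# worker only labelled a subset of the tweets.
print("Label matrix shape:", L_train.shape, "- stored labels:", L_train.nnz)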
###Output
0%| | 0/629 [00:00<?, ?it/s]
###Markdown
Finally, we load the ground truth ("gold") labels for both the training and test sets, and store them as NumPy arrays:
###Code
gold_labels = defaultdict(list)
# Get gold labels in verbose form
verbose_labels = dict([(str(t.tweet_id), t.sentiment)
for i, t in gold_answers[["tweet_id", "sentiment"]].iterrows()])
# Iterate over splits, align with Candidate ordering
for split in range(2):
cands = session.query(Tweet).filter(Tweet.split == split).order_by(Tweet.id).all()
for c in cands:
        # Encode the gold label as its 1-based index in `values` (an integer between 1 and 5)
gold_labels[split].append(values.index(verbose_labels[c.tweet.name]) + 1)
train_cand_labels = np.array(gold_labels[0])
test_cand_labels = np.array(gold_labels[1])
###Output
_____no_output_____
###Markdown
Step 3: Resolving Crowd Conflicts with the Generative ModelUntil now we have converted the raw crowdsourced data into a labeling matrix that can be provided as input to `Snorkel`. We will now show how to:1. Use `Snorkel's` generative model to learn the accuracy of each crowd contributor.2. Use the learned model to estimate a marginal distribution over the domain of possible labels for each task.3. Use the estimated marginal distribution to obtain the maximum a posteriori probability estimate for the label that each task takes.
###Code
# Imports
from snorkel.learning.gen_learning import GenerativeModel
# Initialize Snorkel's generative model for
# learning the different worker accuracies.
gen_model = GenerativeModel(lf_propensity=True)
# Train the generative model
gen_model.train(
L_train,
reg_type=2,
reg_param=0.1,
epochs=30
)
###Output
Inferred cardinality: 5
###Markdown
Inferring the MAP assignment for each taskEach task corresponds to an independent random variable. Thus, we can simply associate each task with the most probable label based on the estimated marginal distribution and get an accuracy score:
###Code
accuracy = gen_model.score(L_train, train_cand_labels)
print("Accuracy: {:.10f}".format(accuracy))
###Output
Accuracy: 0.9952305246
###Markdown
Majority voteIt seems like we did well, but how well? Given that this is a fairly simple task--we have 20 contributors per tweet (and most of them are far better than random)--**we expect majority voting to perform extremely well**, so we can check against majority vote:
###Code
from collections import Counter
# Collect the majority vote answer for each tweet
mv = []
for i in range(L_train.shape[0]):
c = Counter([L_train[i,j] for j in L_train[i].nonzero()[1]])
mv.append(c.most_common(1)[0][0])
mv = np.array(mv)
# Count the number correct by majority vote
n_correct = np.sum([1 for i in range(L_train.shape[0]) if mv[i] == train_cand_labels[i]])
print ("Accuracy:{}".format(n_correct / float(L_train.shape[0])))
print ("Number incorrect:{}".format(L_train.shape[0] - n_correct))
###Output
Accuracy:0.985691573927
Number incorrect:9
###Markdown
We see that while majority vote makes 9 errors, the Snorkel model makes only 3! What about an average crowd worker? Average human accuracyWe see that the average accuracy of a single crowd worker is in fact much lower:
###Code
accs = []
for j in range(L_train.shape[1]):
n_correct = np.sum([1 for i in range(L_train.shape[0]) if L_train[i,j] == train_cand_labels[i]])
acc = n_correct / float(L_train[:,j].nnz)
accs.append(acc)
print( "Mean Accuracy:{}".format( np.mean(accs)))
###Output
Mean Accuracy:0.729664764868
###Markdown
Step 4: Training an ML Model with Snorkel for Sentiment Analysis over Unseen TweetsIn the previous step, we saw that Snorkel's generative model can help to denoise crowd labels automatically. However, what happens when we don't have noisy crowd labels for a tweet?In this step, we'll use the estimates of the generative model as _probabilistic training labels_ to train a simple LSTM sentiment analysis model, which takes as input a tweet **for which no crowd labels are available** and predicts its sentiment.First, we get the probabilistic training labels (_training marginals_) which are just the marginal estimates of the generative model:
###Code
train_marginals = gen_model.marginals(L_train)
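# Added sanity check (assuming the marginals come back as an
# (n_candidates x cardinality) array): take the most likely label per tweet
# and compare it with the gold labels loaded earlier.
map_labels = np.argmax(train_marginals, axis=1) + 1
print("Agreement with gold labels:", np.mean(map_labels == train_cand_labels))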
from snorkel.annotations import save_marginals
save_marginals(session, L_train, train_marginals)
###Output
Saved 629 marginals
###Markdown
Next, we'll train a simple LSTM:
###Code
from snorkel.learning.tensorflow import TextRNN
train_kwargs = {
'lr': 0.01,
'dim': 100,
'n_epochs': 200,
'dropout': 0.2,
'print_freq': 5
}
lstm = TextRNN(seed=1701, cardinality=Tweet.cardinality)
train_cands = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
lstm.train(train_cands, train_marginals, **train_kwargs)
test_cands = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()
accuracy = lstm.score(test_cands, test_cand_labels)
print("Accuracy: {:.10f}".format(accuracy))
###Output
Accuracy: 0.6666666667
###Markdown
Training a Sentiment Analysis LSTM Using Noisy Crowd Labels This is a version of the [crowdsourcing tutorial that uses PySpark](https://github.com/HazyResearch/snorkel/blob/master/tutorials/crowdsourcing/Crowdsourced_Sentiment_Analysis.ipynb) using Pandas instead of SparkSQL.In this tutorial, we'll provide a simple walkthrough of how to use Snorkel to resolve conflicts in a noisy crowdsourced dataset for a sentiment analysis task, and then use these denoised labels to train an LSTM sentiment analysis model which can be applied to new, unseen data to automatically make predictions!1. Creating basic Snorkel objects: `Candidates`, `Contexts`, and `Labels`2. Training the `GenerativeModel` to resolve labeling conflicts3. Training a simple LSTM sentiment analysis model, which can then be used on new, unseen data!Note that this is a simple tutorial meant to give an overview of the mechanics of using Snorkel-- we'll note places where more careful fine-tuning could be done! Task Detail: Weather Sentiments in TweetsIn this tutorial we focus on the [Weather sentiment](https://www.crowdflower.com/data/weather-sentiment/) task from [Crowdflower](https://www.crowdflower.com/).In this task, contributors were asked to grade the sentiment of a particular tweet relating to the weather. Contributors could choose among the following categories:1. Positive2. Negative3. I can't tell4. Neutral / author is just sharing information5. Tweet not related to weather conditionThe catch is that 20 contributors graded each tweet. Thus, in many cases contributors assigned conflicting sentiment labels to the same tweet. The task comes with two data files (to be found in the `data` directory of the tutorial):1. [weather-non-agg-DFE.csv](data/weather-non-agg-DFE.csv) contains the raw contributor answers for each of the 1,000 tweets.2. [weather-evaluated-agg-DFE.csv](data/weather-evaluated-agg-DFE.csv) contains gold sentiment labels by trusted workers for each of the 1,000 tweets.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
###Output
_____no_output_____
###Markdown
Step 1: Preprocessing - Data Loading We load the raw data for our crowdsourcing task (stored in a local csv file) into a dataframe
###Code
import pandas as pd
# Load Raw Crowdsourcing Data
raw_crowd_answers = pd.read_csv("data/weather-non-agg-DFE.csv")
# Load Groundtruth Crowdsourcing Data
gold_crowd_answers = pd.read_csv("data/weather-evaluated-agg-DFE.csv")
# Filter out low-confidence answers
gold_answers = gold_crowd_answers[['tweet_id', 'sentiment', 'tweet_body']][(gold_crowd_answers.correct_category == 'Yes') & (gold_crowd_answers.correct_category_conf == 1)]
# Keep Only the Tweets with Available Groundtruth
candidate_labeled_tweets = raw_crowd_answers.join(gold_answers.set_index('tweet_id',drop=False),on=['tweet_id'],lsuffix='.raw',rsuffix='.gold',how='inner')
candidate_labeled_tweets = candidate_labeled_tweets[['tweet_id.raw','tweet_body.raw','worker_id','emotion']]
candidate_labeled_tweets.columns = ['tweet_id','tweet_body','worker_id','emotion']
###Output
_____no_output_____
###Markdown
As mentioned above, contributors can provide conflicting labels for the same tweet:
###Code
candidate_labeled_tweets.sort_values(['worker_id','tweet_id']).head()
###Output
_____no_output_____
###Markdown
Step 2: Generating Snorkel Objects `Candidates``Candidates` are the core objects in Snorkel representing objects to be classified. We'll use a helper function to create a custom `Candidate` sub-class, `Tweet`, with values representing the possible labels that it can be classified with:
###Code
from snorkel.models import candidate_subclass
values = list(candidate_labeled_tweets.emotion.unique())
Tweet = candidate_subclass('Tweet', ['tweet'], values=values)
###Output
_____no_output_____
###Markdown
`Contexts`All `Candidate` objects point to one or more `Context` objects, which represent the raw data that they are rooted in. In this case, our candidates will each point to a single `Context` object representing the raw text of the tweet.Once we have defined the `Context` for each `Candidate`, we can commit them to the database. Note that we also split into two sets while doing this:1. **Training set (`split=0`):** The tweets for which we have noisy, conflicting crowd labels; we will resolve these conflicts using the `GenerativeModel` and then use them as training data for the LSTM2. **Test set (`split=1`):** We will pretend that we do not have any crowd labels for this split of the data, and use these to test the LSTM's performance on unseen data
###Code
from snorkel.models import Context, Candidate
from snorkel.contrib.models.text import RawText
# Make sure DB is cleared
session.query(Context).delete()
session.query(Candidate).delete()
# Now we create the candidates with a simple loop
tweet_bodies = candidate_labeled_tweets \
[["tweet_id", "tweet_body"]] \
.sort_values("tweet_id") \
.drop_duplicates()
# Generate and store the tweet candidates to be classified
# Note: We split the tweets in two sets: one for which the crowd
# labels are not available to Snorkel (test, 10%) and one for which we assume
# crowd labels are obtained (to be used for training, 90%)
total_tweets = len(tweet_bodies)
tweet_list = []
test_split = total_tweets*0.1
for i, t in tweet_bodies.iterrows():
split = 1 if i <= test_split else 0
raw_text = RawText(stable_id=t.tweet_id, name=t.tweet_id, text=t.tweet_body)
tweet = Tweet(tweet=raw_text, split=split)
tweet_list.append(tweet)
session.add(tweet)
session.commit()
###Output
_____no_output_____
###Markdown
`Labels`Next, we'll store the labels for each of the training candidates in a sparse matrix (which will also automatically be saved to the Snorkel database), with one row for each candidate and one column for each crowd worker:
###Code
from snorkel.annotations import LabelAnnotator
from collections import defaultdict
# Extract worker votes
# Cache locally to speed up for this small set
worker_labels = candidate_labeled_tweets[["tweet_id", "worker_id", "emotion"]]
wls = defaultdict(list)
for i, row in worker_labels.iterrows():
wls[str(row.tweet_id)].append((str(row.worker_id), row.emotion))
# Create a label generator
def worker_label_generator(t):
"""A generator over the different (worker_id, label_id) pairs for a Tweet."""
for worker_id, label in wls[t.tweet.name]:
yield worker_id, label
labeler = LabelAnnotator(label_generator=worker_label_generator)
%time L_train = labeler.apply(split=0)
L_train
###Output
2%|▏ | 15/629 [00:00<00:04, 135.13it/s]
###Markdown
Finally, we load the ground truth ("gold") labels for both the training and test sets, and store them as NumPy arrays:
###Code
gold_labels = defaultdict(list)
# Get gold labels in verbose form
verbose_labels = dict([(str(t.tweet_id), t.sentiment)
for i, t in gold_answers[["tweet_id", "sentiment"]].iterrows()])
# Iterate over splits, align with Candidate ordering
for split in range(2):
cands = session.query(Tweet).filter(Tweet.split == split).order_by(Tweet.id).all()
for c in cands:
        # Encode the gold label as its 1-based index in `values` (an integer between 1 and 5)
gold_labels[split].append(values.index(verbose_labels[c.tweet.name]) + 1)
train_cand_labels = np.array(gold_labels[0])
test_cand_labels = np.array(gold_labels[1])
###Output
_____no_output_____
###Markdown
Step 3: Resolving Crowd Conflicts with the Generative ModelUntil now we have converted the raw crowdsourced data into a labeling matrix that can be provided as input to `Snorkel`. We will now show how to:1. Use `Snorkel's` generative model to learn the accuracy of each crowd contributor.2. Use the learned model to estimate a marginal distribution over the domain of possible labels for each task.3. Use the estimated marginal distribution to obtain the maximum a posteriori probability estimate for the label that each task takes.
###Code
# Imports
from snorkel.learning.gen_learning import GenerativeModel
# Initialize Snorkel's generative model for
# learning the different worker accuracies.
gen_model = GenerativeModel(lf_propensity=True)
# Train the generative model
gen_model.train(
L_train,
reg_type=2,
reg_param=0.1,
epochs=30
)
###Output
Inferred cardinality: 5
###Markdown
Inferring the MAP assignment for each taskEach task corresponds to an independent random variable. Thus, we can simply associate each task with the most probable label based on the estimated marginal distribution and get an accuracy score:
###Code
accuracy = gen_model.score(L_train, train_cand_labels)
print("Accuracy: {:.10f}".format(accuracy))
###Output
Accuracy: 0.9952305246
###Markdown
Majority voteIt seems like we did well, but how well? Given that this is a fairly simple task--we have 20 contributors per tweet (and most of them are far better than random)--**we expect majority voting to perform extremely well**, so we can check against majority vote:
###Code
from collections import Counter
# Collect the majority vote answer for each tweet
mv = []
for i in range(L_train.shape[0]):
c = Counter([L_train[i,j] for j in L_train[i].nonzero()[1]])
mv.append(c.most_common(1)[0][0])
mv = np.array(mv)
# Count the number correct by majority vote
n_correct = np.sum([1 for i in range(L_train.shape[0]) if mv[i] == train_cand_labels[i]])
print ("Accuracy:{}".format(n_correct / float(L_train.shape[0])))
print ("Number incorrect:{}".format(L_train.shape[0] - n_correct))
###Output
Accuracy:0.9841017488076311
Number incorrect:10
###Markdown
We see that while majority vote makes 10 errors, the Snorkel model makes only 3! What about an average crowd worker? Average human accuracyWe see that the average accuracy of a single crowd worker is in fact much lower:
###Code
accs = []
for j in range(L_train.shape[1]):
n_correct = np.sum([1 for i in range(L_train.shape[0]) if L_train[i,j] == train_cand_labels[i]])
acc = n_correct / float(L_train[:,j].nnz)
accs.append(acc)
print( "Mean Accuracy:{}".format( np.mean(accs)))
train_cands = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
train_cands[0]
###Output
_____no_output_____
###Markdown
Step 4: Training an ML Model with Snorkel for Sentiment Analysis over Unseen TweetsIn the previous step, we saw that Snorkel's generative model can help to denoise crowd labels automatically. However, what happens when we don't have noisy crowd labels for a tweet?In this step, we'll use the estimates of the generative model as _probabilistic training labels_ to train a simple LSTM sentiment analysis model, which takes as input a tweet **for which no crowd labels are available** and predicts its sentiment.First, we get the probabilistic training labels (_training marginals_) which are just the marginal estimates of the generative model:
###Code
train_marginals = gen_model.marginals(L_train)
from snorkel.annotations import save_marginals
save_marginals(session, L_train, train_marginals)
###Output
Saved 629 marginals
###Markdown
Next, we'll train a simple LSTM:
###Code
from snorkel.learning.tensorflow import TextRNN
train_kwargs = {
'lr': 0.01,
'dim': 100,
'n_epochs': 200,
'dropout': 0.2,
'print_freq': 5
}
lstm = TextRNN(seed=1701, cardinality=Tweet.cardinality)
train_cands = session.query(Tweet).filter(Tweet.split == 0).order_by(Tweet.id).all()
lstm.train(train_cands, train_marginals, **train_kwargs)
test_cands = session.query(Tweet).filter(Tweet.split == 1).order_by(Tweet.id).all()
accuracy = lstm.score(test_cands, test_cand_labels)
print("Accuracy: {:.10f}".format(accuracy))
###Output
Accuracy: 0.6666666667
|
DeployModel.ipynb | ###Markdown
###Code
!pip3 install torch torchvision
import torch
import torchvision
# Instantiate a ResNet-18 and trace it with an example input to produce a TorchScript module
model = torchvision.models.resnet18()
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
# Run the traced module on a new input of the same shape as the tracing example
output = traced_script_module(torch.ones(1, 3, 224, 224))
output[:, :5]
# Serialize the traced module so it can be loaded later (e.g. from C++ or another Python process)
traced_script_module.save("model.pt")
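# A quick round-trip check (sketch): the saved TorchScript module can be loaded back
# with torch.jit.load and executed, which is how a serving process would consume it.
loaded_module = torch.jit.load("model.pt")
print(loaded_module(torch.ones(1, 3, 224, 224))[:, :5])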
###Output
_____no_output_____ |
projects/learn-python3/notebooks/beginner/exercises/file_io_exercise.ipynb | ###Markdown
1. Sum numbers listed in a fileFill in the ____ pieces of the code below. The `sum_numbers_in_file` function takes an input file path as argument, reads the numbers listed in the input file and returns the sum of those numbers. You can assume that each line contains exactly one numeric value.
###Code
import os  # used by os.path.join below; DATA_DIR is assumed to be defined in the notebook's setup cells

def sum_numbers_in_file(input_file):
sum_ = 0 # A common way to use variable names that collide with built-in/keyword words is to add underscore
with open(input_file, 'r') as file:
for line in file:
check = line.strip() # Remove potential white space
sum_ += float(check)
return sum_
in_file = os.path.join(DATA_DIR, 'numbers.txt')
assert sum_numbers_in_file(in_file) == 189.5
###Output
_____no_output_____
###Markdown
2. Reading first word from each line of a fileImplement `find_first_words` function which takes an input file path as argument. The function should find the first word of each line in the file and return these words as a list. If a line is empty, the returned list should contain an empty string for that line.
###Code
# Your implementation here
def find_first_words(input_file):
ans = []
with open(input_file, 'r') as file:
for line in file:
            text = line.split()
            if not text:  # empty (or whitespace-only) line
                ans.append("")
            else:
                ans.append(text[0])
return ans
in_file1 = os.path.join(DATA_DIR, 'simple_file.txt')
in_file2 = os.path.join(DATA_DIR, 'simple_file_with_empty_lines.txt')
expected_file_1 = ['First', 'Second', 'Third', 'And']
assert find_first_words(in_file1) == expected_file_1
expected_file_2 = ['The', '', 'First', 'nonsense', '', 'Then']
assert find_first_words(in_file2) == expected_file_2
###Output
_____no_output_____ |
NLP/R20.ipynb | ###Markdown
The Bag of Words model is used to preprocess the texts to be classified, turning each review into a vector of word counts, before fitting a classification algorithm on the resulting observations.
###Code
# Importing the dataset
dataset_original = read.delim('Restaurant_Reviews.tsv', quote = '', stringsAsFactors = FALSE)
# Cleaning the texts
#install.packages('tm')
#install.packages('SnowballC')
library(tm)
library(SnowballC)
corpus = VCorpus(VectorSource(dataset_original$Review))
corpus = tm_map(corpus, content_transformer(tolower))
corpus = tm_map(corpus, removeNumbers)
corpus = tm_map(corpus, removePunctuation)
corpus = tm_map(corpus, removeWords, stopwords())
corpus = tm_map(corpus, stemDocument)
corpus = tm_map(corpus, stripWhitespace)
# as.character(corpus[[1]])
# Creating the Bag of Words model
dtm = DocumentTermMatrix(corpus)
dtm = removeSparseTerms(dtm, 0.999)
dataset = as.data.frame(as.matrix(dtm))
dataset$Liked = dataset_original$Liked
# Encoding the target feature as factor
dataset$Liked = factor(dataset$Liked, levels = c(0, 1))
# Splitting the dataset into the Training set and Test set
# install.packages('caTools')
library(caTools)
set.seed(123)
split = sample.split(dataset$Liked, SplitRatio = 0.8)
training_set = subset(dataset, split == TRUE)
test_set = subset(dataset, split == FALSE)
# Fitting Random Forest Classification to the Training set
# install.packages('randomForest')
library(randomForest)
classifier = randomForest(x = training_set[-692],
y = training_set$Liked,
ntree = 10)
# Predicting the Test set results
y_pred = predict(classifier, newdata = test_set[-692])
# Making the Confusion Matrix
table(test_set[, 692], y_pred)
###Output
_____no_output_____ |
demo_example.ipynb | ###Markdown
Small-scale study on Tosca
###Code
# Step 1: Preprocessing of chord sequences: not needed in this case
# Step 2: Transposition to the Cmaj key
# Step 3: Encoding of sequences
# Step 4: Computation of the ngrams
# Step 5: Computation of the harmonic similarity
# Step 6: New hsim_map + (optional) updating the graph
import sys
import json
import joblib
import pandas as pd
from ChordalPy.Transposers import transpose
sys.path.insert(0,'src/')
sonar_meta = pd.read_csv("setup/sonar_datasets_meta.csv")
with open("extra/arias.json", "rb") as fo:
arias_raw = json.load(fo)
with open("setup/sonar_ngrams_global.joblib", "rb") as fo:
sonar_brps = joblib.load(fo)
with open("setup/sonar_encoding_bundle.joblib", "rb") as fo:
encoding_bundle = joblib.load(fo)
encdec = encoding_bundle["encoder_decoder"]
# Scratch check: zip(*dict.items()) separates a dict's keys and values into two tuples
a = {1: 3}
track_id, patterns = zip(*a.items())
track_id[0]
# Each entry of sonar_brps is a single-entry dict {track_id: patterns};
# flatten the list of dicts into one {track_id: patterns} mapping
test = {track_id[0]: patterns[0] for track_id, patterns in
        [zip(*item.items()) for item in sonar_brps]}
# Equivalent, iterating over the items directly
test = {track_id: patterns for item in sonar_brps
        for track_id, patterns in item.items()}
###Output
_____no_output_____
###Markdown
Step 1: Preprocessing chord sequencesFor this example, we do not need to remove consecutive repetitions as they do not occur.
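If repetitions did occur, they could be dropped with a small helper before encoding; here is a minimal sketch (the helper name is just illustrative):
```python
from itertools import groupby

def dedupe_consecutive(chords):
    # Keep only the first chord of each run of identical consecutive chords
    return [chord for chord, _ in groupby(chords)]

assert dedupe_consecutive(["C:maj", "C:maj", "G:7", "C:maj"]) == ["C:maj", "G:7", "C:maj"]
```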
###Code
chord_map = {
"E:min7(b5)": "E:min7",
"F:min7(b5)": "F:min7"
}
chord_pproc = {}
for track_name, track_ann in arias_raw.items():
chord_pproc[track_name] = []
for chord in track_ann["chord"]:
chord_class = chord[0] # just keep the label
chord_class = chord_map.get(chord_class, chord_class)
chord_pproc[track_name].append(chord_class)
###Output
_____no_output_____
###Markdown
Step 2: Transposition to the C(maj) key
###Code
chord_transp = {}
for track_name in chord_pproc.keys():
track_gkey = arias_raw[track_name]["key"][0]
track_gkey = track_gkey[0].split(":")[0]
# Ready to transpose -- only at the tonic-level
transposed = [] # holds the transpositions
for chord in chord_pproc[track_name]:
transposed.append(transpose(chord, track_gkey))
# Update the transp-specific dict
chord_transp[track_name] = transposed
###Output
_____no_output_____
###Markdown
Step 3: Encoding of chord sequences
###Code
chord_encoded = {}
for track_name, chords in chord_transp.items():
chord_encoded[track_name] = []
for chord in chords: # iterating over chords
try:
hash = encdec._compute_chord_hash(chord)
token = encdec.hash_to_index[hash]
chord_encoded[track_name].append(token)
except:
print(f"Could not encode: {chord}")
###Output
_____no_output_____
###Markdown
Step 4: Computation of ngrams
###Code
from ngrams_lib import extract_ngrams
recurring_patterns = {}
for track_name, chords in chord_encoded.items():
recurring_patterns.update(extract_ngrams(
track_name, chords, n_start=3))
###Output
_____no_output_____
###Markdown
Step 5: Computing the harmonic similarity
###Code
from harmonic_lib import ngram_hsim
hsim_map_mini = {track_id: {} for track_id in list(recurring_patterns.keys())}
for track_name, track_brps in recurring_patterns.items():
for sonar_entry in sonar_brps:
sonar_track = list(sonar_entry.keys())[0]
sonar_bag = list(sonar_entry.values())[0]
# Compute the harmonic similarity
hsim, longest_rps = ngram_hsim(track_brps, sonar_bag)
longest_rps = [[encdec.decode_event(idx) for idx in lsrp_shot] \
for lsrp_shot in longest_rps] # keep and decode
if hsim > 0.: # save only non-trivial
hsim_map_mini[track_name][sonar_track] = hsim, longest_rps
hsim_map_mini
sonar_map = sonar_meta.set_index("id").to_dict("index")
for sonar_matches in hsim_map_mini.values():
for sonar_match in sonar_matches:
print("{}:\tTitle: {}; Artist: {}".format(
sonar_match,
sonar_map[sonar_match]["title"],
sonar_map[sonar_match]["artist"])
)
###Output
isophonics_61: Title: 12_-_Wait; Artist: The Beatles
isophonics_152: Title: 06_-_Tigerfest; Artist: Zweieck
isophonics_164: Title: 15_-_Es_Wird_Alles_Wieder_Gut,_Herr_Professor; Artist: Zweieck
isophonics_191: Title: 03_-_I'm_Only_Sleeping; Artist: The Beatles
isophonics_144: Title: 12_-_Jakob_Und_Marie; Artist: Zweieck
|
notebooks/CoQ/CoQ_SOM.ipynb | ###Markdown
School lunch menu suggestion (part 2) - Quantum Annealing Solution Contest Overview: We took part in the [Quantum Annealing Solution Contest](http://www.tfc.tohoku.ac.jp/special/qca/20211218.html) held on 2021/12/19 as team T-QARD-949. This article is the second half of CoQ's write-up; please see the first half for the overall picture. Principle: Self-organizing map. A self-organizing map (SOM) is a clustering technique that projects high-dimensional data onto a two-dimensional map so that clusters can be understood visually. Here, the ingredient combinations are the high-dimensional data: by projecting the dishes already in the menu and the ingredient combinations obtained from the quantum annealer onto the same map, we can suggest new menu items. In this notebook we implement the main-dish part; applying the same procedure to the side dish and the soup lets us combine the three into a full menu. How to read the map is explained later. Experiment: The code is based on [this article](https://www.haya-programming.com/entry/2018/04/07/161249). First, install the required libraries.
###Code
!pip install numpy
!pip install matplotlib
!pip install pandas
!pip install scikit-learn
!pip install somoclu
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from somoclu import Somoclu
from sklearn.decomposition import PCA
###Output
_____no_output_____
###Markdown
Data preparation: We load the list of ingredients obtained with the method introduced in the first half of the article, together with the data to be projected onto the self-organizing map. Here we keep only the staple-dish part of Sendai City's April menu.
###Code
!wget "https://raw.githubusercontent.com/T-QARD/t-wave/main/notebooks/CoQ/school_lunch_sendai.csv"
!wget "https://raw.githubusercontent.com/T-QARD/t-wave/main/notebooks/CoQ/samples.csv"
# List of ingredients to use
df_ans = pd.read_csv('samples.csv')
# Keep only the columns used for the staple dish
dropped_columns = list(df_ans.iloc[:, 4:13].columns) + list(df_ans.iloc[:, 37:].columns)
df_ans.drop(columns=dropped_columns, inplace=True)
df_ans.head()
# Data to be projected onto the self-organizing map
df_data = pd.read_csv('school_lunch_sendai.csv')
df_data = df_data[df_ans.columns] # Keep only the columns used for the staple dish
df_data.head()
###Output
_____no_output_____
###Markdown
Next, we combine the school-lunch menu data and the solutions obtained from the quantum annealing machine into a single array.
###Code
ans = df_ans.loc[2].values # Pick one of the solutions obtained from the quantum annealing machine
X = np.vstack([df_data.values, ans])
# Assign a numeric label to each data point (dish)
y = np.arange(len(X))
###Output
_____no_output_____
###Markdown
Projection onto the self-organizing mapFinally, we project the data onto the self-organizing map.
###Code
som = Somoclu(n_rows=16, n_columns=24, initialization="pca", verbose=3)
som.train(data=X, epochs=1000)
som.view_umatrix(labels=y, bestmatches=True, filename="umatrix.png")
plt.show()
###Output
Warning: data was not float32. A 32-bit copy was made
###Markdown
Brighter colors indicate larger distances between data points, and darker colors indicate smaller ones. Points that fall together in a dark region surrounded by brighter cells can therefore be regarded as a cluster, i.e. they are close to each other in the data (the upper-left area of the map figure). To check that we are reading the self-organizing map correctly, we also display the similarity matrix.
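Besides the visual check with the similarity matrix below, the cluster memberships can also be read off programmatically. A minimal sketch, assuming `som.bmus` (stored by somoclu after training) holds one best-matching-unit coordinate pair per row of `X`:
```python
from collections import defaultdict

# Group the projected points by their best-matching unit (BMU) on the map
node_members = defaultdict(list)
for label, bmu in zip(y, som.bmus):
    node_members[tuple(bmu)].append(label)

annealer_label = len(X) - 1  # the last row of X is the annealer-generated combination
for node, members in node_members.items():
    if annealer_label in members:
        print("Annealer solution lands on node", node, "together with dishes:", members)
```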
###Code
som.view_similarity_matrix(labels=y)
plt.show()
###Output
_____no_output_____ |
jupyter_notebook/.ipynb_checkpoints/keys_and_addresses_python_notebook-checkpoint.ipynb | ###Markdown
Keys and Addresses Exercises Requirements $ pip3 install ethereum $ pip3 install bitcoin $ pip3 install pycryptodomex $ pip3 install jupyter
###Code
# Import libraries
import sys
# Vitalik Buterin's Python Library for Bitcoin
# No longer maintained!
# https://pypi.python.org/pypi/bitcoin/1.1.42
import bitcoin
# Vitalik Buterin's Python Library for Ethereum
# https://github.com/ethereum/pyethereum
import ethereum
# pysha3 package - SHA-3 (Keccak) for Python 2.7 - 3.5
# The sha3 module monkey patches the hashlib module.
# The monkey patch is automatically activated with the first import of the sha3 module.
if sys.version_info < (3, 6):
import sha3
# Wrong source of SHA3 (FIPS-202 not Keccak-256)
from hashlib import sha3_256 as hashlib_sha3
# Both FIP-202 SHA-3 and Keccak-256 from pycryptodomex
from Crypto.Hash import SHA3_256 as crypto_sha3
from Crypto.Hash import keccak as crypto_keccak
# Ethereum library implements Keccak, but calls it sha3
from ethereum.utils import sha3 as keccak256
from rlp.utils import decode_hex, encode_hex
###Output
_____no_output_____
###Markdown
We supply a valid private key (in hex format)
###Code
privkey_hex = "f8f8a2f43c8376ccb0871305060d7b27b0554d2cc72bccf41b2705608452f315"
privkey_hex = encode_hex(keccak256(b""))
privkey = decode_hex(privkey_hex)
# Use pybitcointools (bitcoin) library's elliptic curve functions to calculate the public key
pubkey = bitcoin.privtopub(privkey)
pubkey_hex = encode_hex(pubkey)
print("Public Key: " + pubkey_hex)
pubkey_without_prefix = pubkey_hex[2:]
x_hex = pubkey_without_prefix[:64]
y_hex = pubkey_without_prefix[64:]
print("x (hex) : " + x_hex)
print("y (hex) : " + y_hex)
x = int(x_hex, 16)
y = int(y_hex, 16)
print("x (int) : ", x)
print("y (int) : ", y)
# Prove pubkey is a point on the curve
# p is the prime order of the elliptic curve field
p = 115792089237316195423570985008687907853269984665640564039457584007908834671663
(x ** 3 + 7 - y**2) % p
# Which "SHA3" am I using?
# Uncomment below to try various options
#test_hash = hashlib_sha3(b"").hexdigest()
#test_hash = crypto_sha3.new(b"").hexdigest()
#test_hash = crypto_keccak.new(digest_bits=256, data=b"").hexdigest()
test_hash = encode_hex(keccak256(b""))
print(test_hash)
if test_hash == "c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470":
print("Hash Function is Keccak-256")
elif test_hash == "a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a":
print("Hash Function is FIP-202 SHA-3")
else:
print("Oops! Can't identify SHA3")
hex_hash = encode_hex(keccak256(decode_hex(pubkey_without_prefix)))
print ("Public Key Hash: " + hex_hash)
address = hex_hash[24:]
print("Ethereum Address: 0x" + address)
# Let's calculate the EIP-55 mixed-capitalization checksum address
# Take the lower-case address and hash it again, to produce a checksum
address_hash_hex = encode_hex(keccak256(address))
print(address_hash_hex)
# Simple implementation of EIP-55
# For every alphabetic character of the address,
# capitalize it if the corresponding character of the hash is greater than or equal to 8
a = ""
for i, c in enumerate(address):
if c in '0123456789':
a = a + c
elif int(address_hash_hex[i], 16) >= 8:
a = a + c.upper()
else:
a = a + c.lower()
print("EIP-55 encoded Ethereum Address: 0x"+a)
###Output
EIP-55 encoded Ethereum Address: 0x001d3F1ef827552Ae1114027BD3ECF1f086bA0F9
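###Markdown
As a quick sanity check, we can also go the other way and verify an EIP-55 address: re-hash the lower-case hex address and confirm that each letter's capitalization matches the corresponding hash nibble (upper-case if and only if the nibble is >= 8). This is a sketch that reuses the `keccak256`/`encode_hex` helpers imported above.
###Code
def is_valid_eip55(checksummed_address):
    # Strip the 0x prefix and re-derive the checksum hash from the lower-cased address
    addr = checksummed_address[2:] if checksummed_address.startswith("0x") else checksummed_address
    hash_hex = encode_hex(keccak256(addr.lower()))
    for i, c in enumerate(addr):
        if c.isalpha():
            should_be_upper = int(hash_hex[i], 16) >= 8
            if c.isupper() != should_be_upper:
                return False
    return True

is_valid_eip55("0x" + a)  # expected: True for the address derived above
###Output
_____no_output_____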
|
docs/notebooks/plugins/meep/002_meep_sparameters.ipynb | ###Markdown
SparametersHere are some examples of how to extract S-parameters in Meep.FIXME: for a full S-parameter matrix we need to add a function that excites each port in turn.- add monitors- run the simulation- pull the eigenmode coefficients and form the proper ratios to get the S-parameters. The monitors record the Fourier-transformed fields, so a single run gives the frequency-domain response at many frequencies. From the recorded fields we extract the eigenmode coefficients: - forward coefficient: how much power is in the forward waveguide mode - backward coefficient: how much power is in the backward waveguide mode. An S-parameter is then a ratio of these coefficients.
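For reference, with port 1 excited and writing $a_1$ for the incident (forward) mode coefficient at port 1, $b_1$ for the backward coefficient at port 1 and $b_2$ for the forward coefficient at port 2, the standard two-port ratios are $$S_{11} = \frac{b_1}{a_1}, \qquad S_{21} = \frac{b_2}{a_1},$$ evaluated at each frequency of the Fourier-transformed fields.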
###Code
import matplotlib.pyplot as plt
import numpy as np
import gdsfactory as gf
import gdsfactory.simulation.gmeep as gm
gf.config.set_plot_options(show_subports=False, show_ports=False)
c = gf.components.straight(length=2)
cm = gm.add_monitors(component=c)
r = gm.get_transmission_2ports(cm, run=False, res=10)
cm
r.keys()
gm.plot2D(r)
sim = r['sim']
sim.plot2D()
import gdsfactory as gf
import matplotlib.pyplot as plt
import gdsfactory.simulation.gmeep as gm
component_name = 'mmi1x2'
c = gf.components.factory[component_name]()
c
sim = gm.get_simulation(c)
gm.plot2D(sim)
component_name = 'mmi1x2'
c = gf.components.factory[component_name]()
gm.add_monitors(c)
c
sim = gm.get_simulation(c)
gm.plot2D(sim)
gf.c.bend_euler()
component_name = 'bend_euler'
c = gf.components.factory[component_name]()
c.show()
sim = gm.get_simulation(c)
gm.plot2D(sim)
components = ['bend_euler', 'bend_s', 'coupler', 'coupler_ring', 'crossing', 'mmi1x2', 'mmi2x2', 'taper', 'straight']
for component_name in components:
c = gf.components.factory[component_name]()
print(c)
c.show()
plt.figure()
sim = gm.get_simulation(c)
gm.plot2D(sim)
r = gm.get_transmission_2ports(cm, run=True, res=10, is_3d=False)
r.keys()
plt.plot(r['wavelengths']*1e3, abs(r['s11']))
plt.ylabel('|S11|')
plt.xlabel('wavelength (nm)')
plt.plot(r['wavelengths']*1e3, abs(r['s21']))
plt.ylabel('|S21|')
plt.xlabel('wavelength (nm)')
print(f"Transmission = {np.mean(abs(r['s21']))*100:.2f} %")
print(f"Reflection = {np.mean(abs(r['s11']))*100:.2f} %")
###Output
_____no_output_____
###Markdown
Bend
###Code
import gdsfactory as gf
import matplotlib.pyplot as plt
import numpy as np
c = gf.c.bend_circular(radius=5)
margin = 0.5
cm = gm.add_monitors(component=c)
cm
cm.ports
r = gm.get_transmission_2ports(cm, res=10, run=False, is_3d=False)
r['sim'].plot2D()
r = gm.get_transmission_2ports(cm, run=True, res=30, is_3d=False)
plt.plot(r['wavelengths']*1e3, abs(r['s11']))
plt.ylabel('|S11|')
plt.xlabel('wavelength (nm)')
plt.figure()
plt.plot(r['wavelengths']*1e3, abs(r['s21']))
plt.ylabel('|S21|')
plt.xlabel('wavelength (nm)')
c = gf.c.bend_circular(radius=2)
margin = 0.5
cm = gm.add_monitors(component=c, port_margin=1)
cm
r_2um_2d= gm.get_transmission_2ports(cm, run=True, res=30, is_3d=False)
plt.plot(r_2um_2d['wavelengths']*1e3, abs(r_2um_2d['s21']))
plt.ylabel('|S21|')
plt.xlabel('wavelength (nm)')
###Output
_____no_output_____
###Markdown
Bend eulerAn Euler bend provides lower loss than a circular bend of the same radius, as the comparison below shows.
###Code
c = gf.c.bend_euler(radius=2)
margin = 0.5
cm = gm.add_monitors(component=c, port_margin=1)
cm
r_2um_2d= gm.get_transmission_2ports(cm, run=True, res=30, is_3d=False)
plt.plot(r_2um_2d['wavelengths']*1e3, abs(r_2um_2d['s21']))
plt.ylabel('|S21|')
plt.xlabel('wavelength (nm)')
###Output
_____no_output_____
###Markdown
3D sims3D simulations take longer
###Code
r_2um_3d = gm.get_transmission_2ports(cm, run=False, res=10, is_3d=True)
gm.plot_xsection(r_2um_3d['sim'], center=(-2, 0, 0))
r_2um_3d= gm.get_transmission_2ports(cm, run=True, res=10, is_3d=True)
plt.plot(r_2um_2d['wavelengths']*1e3, abs(r_2um_2d['s21']), label='2D')
plt.plot(r_2um_3d['wavelengths']*1e3, abs(r_2um_3d['s21']), label='3D')
plt.ylabel('|S21|')
plt.xlabel('wavelength (nm)')
plt.legend()
plt.plot(r_2um_2d['wavelengths']*1e3, abs(r_2um_2d['s11']), label='2D')
plt.plot(r_2um_3d['wavelengths']*1e3, abs(r_2um_3d['s11']), label='3D')
plt.ylabel('|S11|')
plt.xlabel('wavelength (nm)')
plt.legend()
###Output
_____no_output_____ |
compare_different_parameter.ipynb | ###Markdown
Choosing the Activation Function
###Code
activation_func_use = activation_func[0]
loss_func_use = loss_func[0]
optimizer_scheme_use = optimizer_scheme[0]
list_activation = []
for i in range(len(activation_func)):
activation_func_use = activation_func[i]
print('********For the activation function', activation_func[i], '********')
model = Sequential()
model.add(Dense(64, input_dim=n, activation=activation_func_use))
model.add(Dense(16, activation=activation_func_use))
model.add(Dense(1, activation=activation_func_use))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, )
model.compile(loss=loss_func_use, optimizer=optimizer_scheme_use, metrics=['binary_accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=100,verbose =0)
loss_and_metrics = model.evaluate(X_test, y_test, batch_size=128)
print('For attack1: The loss is', loss_and_metrics[0], ', the accuracy is', loss_and_metrics[1])
list_activation.append([activation_func_use,loss_and_metrics[1]])
for i in range(len(list_activation)):
plt.scatter(i,list_activation[i][1],label = list_activation[i][0])
plt.legend(loc='best')
plt.xlabel("active function")
plt.ylabel("accurency")
plt.show()
###Output
_____no_output_____
###Markdown
Choosing the Loss function
###Code
activation_func_use = activation_func[0]
loss_func_use = loss_func[0]
optimizer_scheme_use = optimizer_scheme[0]
list_loss =[]
for i in range(len(loss_func)):
loss_func_use = loss_func[i]
print('********For the loss function', loss_func[i], '********')
model = Sequential()
model.add(Dense(64, input_dim=n, activation=activation_func_use))
model.add(Dense(16, activation=activation_func_use))
model.add(Dense(1, activation=activation_func_use))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, )
model.compile(loss=loss_func_use, optimizer=optimizer_scheme_use, metrics=['binary_accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=100,verbose=0)
loss_and_metrics = model.evaluate(X_test, y_test, batch_size=128)
print('For attack1: The loss is', loss_and_metrics[0], ', the accuracy is', loss_and_metrics[1])
list_loss.append([loss_func_use,loss_and_metrics[1]])
for i in range(len(list_loss)):
plt.scatter(i,list_loss[i][1],label = list_loss[i][0])
plt.legend(loc='best')
plt.xlabel("loss function")
plt.ylabel("accurency")
plt.show()
###Output
_____no_output_____
###Markdown
Choosing the proper Optimizer
###Code
activation_func_use = activation_func[0]
loss_func_use = loss_func[0]
optimizer_scheme_use = optimizer_scheme[0]
list_optimizer = []
for i in range(len(optimizer_scheme)):
optimizer_scheme_use = optimizer_scheme[i]
print('***********For the optimizer_scheme', optimizer_scheme[i], '***********')
model = Sequential()
model.add(Dense(64, input_dim=n, activation=activation_func_use))
model.add(Dense(16, activation=activation_func_use))
model.add(Dense(1, activation=activation_func_use))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, )
model.compile(loss=loss_func_use, optimizer=optimizer_scheme_use, metrics=['binary_accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=100,verbose = 0)
loss_and_metrics = model.evaluate(X_test, y_test, batch_size=128)
print('For attack1: The loss is', loss_and_metrics[0], ', the accuracy is', loss_and_metrics[1])
list_optimizer.append([optimizer_scheme_use,loss_and_metrics[1]])
for i in range(len(list_optimizer)):
plt.scatter(i,list_optimizer[i][1],label = list_optimizer[i][0])
plt.legend(loc='best')
plt.xlabel("optimizer function")
plt.ylabel("accurency")
plt.show()
###Output
_____no_output_____ |
Tasking/tasking_intro_python_training.ipynb | ###Markdown
Request API data with Python Index1. Create an order2. Monitor an order3. Download successful captures
###Code
import os
import sys
import requests
import pytz
from time import sleep
from datetime import datetime, timedelta
API_KEY = "YOUR API KEY IN HERE"
PLANET_API_HOST = "api.planet.com"
###Output
_____no_output_____
###Markdown
1 | Create an order
###Code
PARAMETERS = {
'name': 'Satellite Control Demo',
'geometry': {
"type": "Point",
"coordinates": [26.039818165394035, 50.524749755859375] #long, lat
}
}
tomorrow = datetime.now(pytz.utc) + timedelta(days=1)
one_week_later = tomorrow + timedelta(days=7)
OPTIONAL_PARAMETERS = {
'start_time': tomorrow.isoformat(),
'end_time': one_week_later.isoformat()
}
# Optionally use optional parameters
PARAMETERS.update(OPTIONAL_PARAMETERS)
import json
def create_order(parameters):
if not API_KEY:
print('Please set your planet api key in your environment as PLANET_API_KEY')
headers = {
'Authorization': f'api-key {API_KEY}',
'Content-Type': 'application/json'
}
response = requests.post(f'https://{PLANET_API_HOST}/tasking/v2/orders/', json=parameters, headers=headers)
if response.status_code == 403:
print('Your API KEY is valid, but you are not authorized.')
elif response.status_code == 401:
print('Your API KEY is incorrect')
elif response.status_code == 201:
print('Your order was created successfully with ID: {}'.format(response.json()["id"]))
else:
print(f'Received status code {response.status_code} from the API. Please contact support.')
create_order(PARAMETERS)
###Output
_____no_output_____
###Markdown
2 | Monitor an order
###Code
def monitor_order(order_id):
if not API_KEY:
print('Please set your planet api key in your environment as PLANET_API_KEY')
headers = {
'Authorization': f'api-key {API_KEY}',
'Content-Type': 'application/json'
}
response = requests.get(f'https://{PLANET_API_HOST}/tasking/v2/orders/{order_id}/', headers=headers)
status = response.status_code
if response.status_code == 403:
print('Your API KEY is valid, but you are not authorized to view this order.')
elif response.status_code == 401:
print('Your API KEY is incorrect')
elif response.status_code == 404:
        print(f'Your order ({order_id}) does not exist')
elif response.status_code != 200:
print(f'Received status code {response.status_code} from the API. Please contact support.')
else:
order = response.json()
print(f'Your order is {order["status"]} with {order["capture_status_published_count"]} published captures '
f'and {order["capture_assessment_success_count"]} successful captures')
return status
# Use ID of your previous order
ORDER_ID = "YOUR ORDER ID IN HERE"
monitor_order(ORDER_ID)
###Output
_____no_output_____
###Markdown
3 | Download successful captures
###Code
def download_successful_captures(order_id):
if not API_KEY:
print('Please set your planet api key in your environment as PLANET_API_KEY')
headers = {
'Authorization': f'api-key {API_KEY}',
'Content-Type': 'application/json'
}
status = monitor_order(order_id)
if status == 200:
response = requests.get(f'https://{PLANET_API_HOST}/tasking/v2/captures/?order_id={order_id}&fulfilling=true',
headers=headers)
if response.status_code == 403:
print('Your API KEY is valid, but you are not authorized to view this order.')
elif response.status_code == 401:
print('Your API KEY is incorrect')
elif response.status_code != 200:
print(f'Received status code {response.status_code} from the API. Please contact support.')
else:
captures = response.json()['results']
strip_ids = [capture['strip_id'] for capture in captures]
search_data = {
"filter": {
"config": strip_ids,
"field_name": "strip_id",
"type": "StringInFilter"
},
"item_types": ["SkySatCollect"]
}
data_api_response = requests.post(f'https://{PLANET_API_HOST}/data/v1/quick-search', headers=headers,
json=search_data)
asset_urls = [feature['_links']['assets'] for feature in data_api_response.json()['features']]
# Activate the ortho_visual asset(s)
ortho_visual_urls = []
for asset_url in asset_urls:
assets = requests.get(asset_url, headers=headers).json()
activation_url = assets['ortho_visual']['_links']['activate']
requests.get(activation_url, headers=headers)
ortho_visual_urls.append(assets['ortho_visual']['_links']['_self'])
# Wait for activation and print
for ortho_visual_url in ortho_visual_urls:
ortho_visual = requests.get(ortho_visual_url, headers=headers).json()
while 'location' not in ortho_visual:
sleep(10)
print('Waiting 10 seconds for asset to unlock...')
ortho_visual = requests.get(ortho_visual_url, headers=headers).json()
print(f'Open the following link in a browser or download it to a file:\n{ortho_visual["location"]}')
download_successful_captures(ORDER_ID)
###Output
_____no_output_____ |
notebooks/ml_topic_analysis_exploration.ipynb | ###Markdown
Prototype pipeline for the analysis of ML arxiv dataWe query arxiv to get papers, and then run them against Crossref event data to find social media discussion and Microsoft Academic Knowledge to find institutional affiliations```Query Arxiv -> Paper repository -> Analysis -> Topic model -> Classify | | | |----> Social network analysis of researchers | |----> Geocoding of institutions (via GRID?) | Extract author data from Google Scholar ----> Geocode institution via Google Places API? | | Enrich paper data with MAK(?) |---> Spatial and network analysis | Obtain Crossref Event data ``` Preamble
###Code
%matplotlib inline
#Some imports
import time
#import xml.etree.ElementTree as etree
from lxml import etree
import feedparser
#Imports
#Key imports are loaded from my profile (see standard_imports.py in src folder).
#Paths
#Paths
top = os.path.dirname(os.getcwd())
#External data (to download the GRID database)
ext_data = os.path.join(top,'data/external')
#Interim data (to place seed etc)
int_data = os.path.join(top,'data/interim')
#Figures
fig_path = os.path.join(top,'reports')
#Models
mod_path = os.path.join(top,'models')
#Get date for saving files
today = datetime.datetime.today()
today_str = "_".join([str(x) for x in [today.day,today.month,today.year]])
#Functions
###Output
_____no_output_____
###Markdown
1. Get Arxiv data about machine learning* Write an API querier and extract papers with the terms machine learning or artificial intelligence. Get 2000 results... and play nice!
###Code
class Arxiv_querier():
'''
This class takes as an input a query and the number of results, and returns all the parsed results.
Includes routines to deal with multiple pages of results.
'''
def __init__(self,base_url="http://export.arxiv.org/api/query?"):
'''
Initialise
'''
self.base_url = base_url
def query(self,query_string,max_results=100,wait_time=3):
'''
Query the base url
'''
#Attribute query string
#Load base URL
base_url = self.base_url
#Prepare query string
processed_query = re.sub(' ','+',query_string)
self.query_string="_".join(query_string.split(" "))
start=0
pages = 0
#Run the query and store results for as long as the number of results is bigger than the max results
keep_running = True
result_store = []
while keep_running==True:
pages +=1
print(pages)
#Query url (NB the start arg, which will change as we go through different
#pages)
query_url = base_url+'search_query=all:{q}&start={s}&max_results={max_res}'.format(
q=processed_query,s=start,max_res=max_results)
#Download
source = requests.get(query_url)
#Parse the xml and get the entries (papers)
parsed = feedparser.parse(source.content)
#Extract entries
entries = parsed['entries']
#If the number of entries is bigger than the maximum number of results
#this means we need to go to another page. We do that by offseting the
#start with max results
result_store.append(entries)
if len(entries)==max_results:
start+=max_results
#If we have less than max results this means we have run out of
#results and we toggle the keep_running switch off.
if len(entries)<max_results:
keep_running=False
time.sleep(wait_time)
#Save results in a flat list
self.entry_results = [x for el in result_store for x in el]
def extract_data(self):
'''
Here we extract data from the entries
'''
#Load entries
entries = self.entry_results
#Create df
output = pd.concat([pd.DataFrame({
'query':self.query_string,
'id':x['id'],
'link':x['link'],
'title':x['title'],
'authors':", ".join([el['name'] for el in x['authors']]),
'summary':x['summary'],
'updated':x['updated'],
'published':x['published'],
'category':x['arxiv_primary_category']['term'],
'pdf':str([el['href'] for el in x['links'] if el['type']=='application/pdf'][0]
)},index=[0]) for x in entries]).reset_index(drop=True)
output['year_published'] = [x.split("-")[0] for x in output['published']]
self.output_df = output
query_terms = ['artificial intelligence','machine learning','deep learning']
#There are some inconsistencies in the number of results so we run the query three times for each
#term and remove duplicated results
def extract_arxiv_data(term,max_results=1000,wait_time=10, tests=3):
'''
This function initialises the Arxiv_querier class, extracts the data and outputs it
'''
print(term)
collected = []
#We collect the data thrice
for i in np.arange(tests):
print('run'+ ' ' +str(i))
initialised = Arxiv_querier()
initialised.query(term,max_results,wait_time)
initialised.extract_data()
out = initialised.output_df
collected.append(out)
#We concatenate the dfs and remove the duplicates.
output = pd.concat(collected)
output_no_dupes = output.drop_duplicates('id')
#Return both
return([output,output_no_dupes])
arxiv_ai_results_three = [extract_arxiv_data(term=q) for q in query_terms]
all_papers = pd.concat([x[1] for x in arxiv_ai_results_three]).drop_duplicates('id').reset_index(drop=True)
print(all_papers.shape)
all_papers.head()
all_papers.to_csv(int_data+'/{today}_ai_papers.csv'.format(today=today_str),index=False)
###Output
_____no_output_____
###Markdown
2. Some exploratory analysis
###Code
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize, RegexpTokenizer, PunktSentenceTokenizer
from nltk.stem import WordNetLemmatizer, SnowballStemmer, PorterStemmer
import scipy
import ast
import string as st
from bs4 import BeautifulSoup
import gensim
from gensim.models.coherencemodel import CoherenceModel
from sklearn.feature_extraction.text import TfidfVectorizer
from itertools import product
stopwords_c = stopwords.words('english')
stemmer = PorterStemmer()
lemmatizer= WordNetLemmatizer()
#Read papers
all_papers = pd.read_csv(int_data+'/19_8_2017_ai_papers.csv')
#Let's begin by looking at years
#When where they published?
#Year distribution
year_pubs = all_papers['year_published'].value_counts()
year_pubs.index = [int(x) for x in year_pubs.index]
fig,ax = plt.subplots(figsize=(10,5))
year_pubs_sorted = year_pubs[sorted(year_pubs.index)]
year_pubs_subset = year_pubs_sorted[year_pubs_sorted.index>1991]
ax.plot(np.arange(1993,2018),year_pubs_subset.cumsum(),color='red')
ax.bar(np.arange(1993,2018),year_pubs_subset)
ax.hlines(xmin=1993,xmax=2017,y=[10000,20000,30000,40000],colors='green',linestyles='dashed',alpha=0.7)
ax.set_title("Papers on AI, ML and DL, total per year (bar) and cumulative (red)",size=14)
#What are the categories of the papers? Are we capturing what we think we are capturing
#Top 20
all_papers['category'].value_counts()[:20]
###Output
_____no_output_____
###Markdown
See here for abbreviations of categories.In a nutshell, AI is AI, LG is 'Learning', CV is 'Computer Vision', 'CL' is 'Computation and Language' and NE is 'Neural and Evolutionary Computing'. stat.ML is kind of self-explanatory. We seem to be picking up the main things
###Code
#NB do we want to remove hyphens?
punct = re.sub('-','',st.punctuation)
def comp_sentence(sentence):
'''
Takes a sentence and pre-processes it.
The output is the sentence as a bag of words
'''
#Remove line breaks and hyphens
sentence = re.sub('\n',' ',sentence)
sentence = re.sub('-',' ',sentence)
#Lowercase and tokenise
text_lowered = [x.lower() for x in sentence.split(" ")]
#Remove signs and digits
text_no_signs_digits = ["".join([x for x in el if x not in punct+st.digits]) for
el in text_lowered]
#Remove stop words, single letters
text_stopped = [w for w in text_no_signs_digits if w not in stopwords_c and
len(w)>1]
    #Lemmatise
text_lemmatised = [lemmatizer.lemmatize(w) for w in text_stopped]
#Output
return(text_lemmatised)
#Process text
clean_corpus = [comp_sentence(x) for x in all_papers['summary']]
#We remove rate words
word_freqs = pd.Series([x for el in clean_corpus for x in el]).value_counts()
word_freqs[:30]
rare_words = word_freqs.index[word_freqs<=2]
rare_words[:10]
###Output
_____no_output_____
###Markdown
Lots of the rare words seem to be typos and so forth. We remove them
###Code
#Removing rare words
clean_corpus_no_rare = [[x for x in el if x not in rare_words] for el in clean_corpus]
###Output
_____no_output_____
###Markdown
2 NLP (topic modelling & word embeddings)
###Code
#Identify 2-grams (frequent in science!)
bigram_transformer = gensim.models.Phrases(clean_corpus_no_rare)
#Train the model on the corpus
#Let's do a bit of grid search
model = gensim.models.Word2Vec(bigram_transformer[clean_corpus], size=360, window=15, min_count=2, iter=20)
model.most_similar('ai_safety')
model.most_similar('complexity')
model.most_similar('github')
#Create 3 different dictionaries and bows depending on word sizes
def remove_words_below_threshold(corpus,threshold):
'''
Takes a list of terms and removes any which are below a threshold of occurrences
'''
#Produce token frequencies
token_frequencies = pd.Series([x for el in corpus for x in el]).value_counts()
#Identify tokens to drop (below a threshold)
tokens_to_drop = token_frequencies.index[token_frequencies<=threshold]
#Processed corpus
processed_corpus = [[x for x in el if x not in tokens_to_drop] for el in corpus]
#Dictionary
dictionary = gensim.corpora.Dictionary(processed_corpus)
corpus_bow = [dictionary.doc2bow(x) for x in processed_corpus]
return([dictionary,corpus_bow,processed_corpus])
#Initial model run to see what comes out.
#Transform corpus to bigrams
transformed_corpus = bigram_transformer[clean_corpus]
corpora_to_process = {str(x):remove_words_below_threshold(transformed_corpus,x) for x in [1,2,5,10]}
#Need to turn this into a function.
#Topic modelling
#Parameters for Grid search.
lda_params = list(product([100,200,300],[2,5]))
#Model container
lda_models = []
for x in lda_params:
#Print stage
print('{x}_{y}'.format(x=x[0],y=x[1]))
#Load corpus and dict
dictionary = corpora_to_process[str(x[1])][0]
corpus_bow = corpora_to_process[str(x[1])][1]
corpus = corpora_to_process[str(x[1])][2]
print('training')
#Train model
mod = gensim.models.LdaModel(corpus_bow,num_topics=x[0],id2word=dictionary,
passes=10,iterations=50)
print('coherence')
#Extract coherence
cm = CoherenceModel(mod,texts=corpus,
dictionary=dictionary,coherence='u_mass')
#Get value
try:
coherence_value = cm.get_coherence()
except:
print('coherence_error')
coherence_value='error'
lda_models.append([x,mod,[coherence_value,cm]])
with open(mod_path+'/{t}_ai_topic_models.p'.format(t=today_str),'wb') as outfile:
pickle.dump(lda_models,outfile)
#Visualiase model performance
model_eval = pd.DataFrame([[x[0][0],x[0][1],x[2][0]] for x in lda_models],columns=['topics','word_lim','coherence'])
fig,ax = plt.subplots(figsize=(10,5))
cols = ['red','green','blue']
legs = []
for num,x in enumerate(set(model_eval['word_lim'])):
subset = model_eval.loc[[z == x for z in model_eval['word_lim']],:]
ax.plot(subset.loc[:,'topics'],subset.loc[:,'coherence'],color=cols[num-1])
legs.append([cols[num-1],x])
ax.legend(labels=[x[1] for x in legs],title='Min word count')
ax.set_title('Model performance with different parameters')
with open(mod_path+'/19_8_2017_ai_topic_models.p','rb') as infile:
lda_models = pickle.load(infile)
check_model= lda_models[1][1]
#Explore topics via LDAvis
import pyLDAvis.gensim
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(
#Insert best model/corpus/topics here
check_model,
corpora_to_process[str(5)][1],
corpora_to_process[str(5)][0])
#Can we extract the relevant terms for the topics as in Sievert and Shirley in order to name them?
#First - create a matrix with top 30 terms per topic
top_30_kws = [check_model.get_topic_terms(topicid=n,topn=1000) for n in np.arange(0,100)]
#Keyword df where the columns are tokens and the rows are topics
top_30_kws_df = pd.concat([pd.DataFrame([x[1] for x in el],
index=[x[0] for x in el]) for el in top_30_kws],
axis=1).fillna(0).T.reset_index(drop=True)
#This is the dictionary
selected_dictionary = corpora_to_process[str(5)][0]
#Total number of terms in the document
total_terms = np.sum([vals for vals in selected_dictionary.dfs.values()])
#Appearances of different terms
document_freqs = pd.Series([v for v in selected_dictionary.dfs.values()],
index=[k for k in selected_dictionary.dfs.keys()])[top_30_kws_df.columns]/total_terms
#Normalise the terms (divide the vector of probabilities of each keywords in each topic by the totals)
top_30_kws_normalised = top_30_kws_df.apply(lambda x: x/document_freqs,axis=0)
#Now we want to extract, for each topic, the relevance score.
def relevance_score(prob_in_topic,prob_in_corpus,id2word_lookup,lambda_par = 0.6):
'''
    Combines the probabilities using the definition in Sievert and Shirley and returns the top 5 named
    terms for each topic
'''
#Create dataframe
combined = pd.concat([prob_in_topic,prob_in_corpus],axis=1)
combined.columns=['prob_in_topic','prob_in_corpus']
#Create relevance metric
combined['relevance'] = lambda_par*combined['prob_in_topic'] + (1-lambda_par)*combined['prob_in_corpus']
#Top words
top_ids = list(combined.sort_values('relevance',ascending=False).index[:5])
#Top words
top_words = "_".join([id2word_lookup[this_id] for this_id in top_ids])
return(top_words)
relevance_scores = [relevance_score(top_30_kws_df.iloc[n,:],
top_30_kws_normalised.iloc[n,:],
dictionary.id2token,lambda_par=0.6) for n in np.arange(len(top_30_kws_df))]
%%time
#Create a df with the topic predictions.
paper_preds = check_model[corpora_to_process[str(5)][1]]
paper_topics_df = pd.concat([pd.DataFrame([x[1] for x in el],index=[x[0] for x in el]) for el in paper_preds],
axis=1).T
#Replace NAs with zeros and drop pointless index
paper_topics_df.fillna(value=0,inplace=True)
paper_topics_df.reset_index(drop=True,inplace=True)
paper_topics_df.columns = relevance_scores
paper_topics_df.to_csv(int_data+'/{t}_paper_topic_mix.csv'.format(t=today_str),index=False)
#paper_topics_df = pd.read_csv(int_data+'/{t}_paper_topic_mix.csv')
#Quick test of Deep learning papers
#These are papers with a topic that seems to capture deep learning
dl_papers = [x>0.05 for x in paper_topics_df['network_training_model_deep_deep_learning']]
dl_papers_metadata = pd.concat([pd.Series(dl_papers),all_papers],axis=1)
paper_frequencies = pd.crosstab(dl_papers_metadata.year_published,dl_papers_metadata[0])
paper_frequencies.columns=['no_dl','dl']
fig,ax = plt.subplots(figsize=(10,5))
paper_frequencies.plot.bar(stacked=True,ax=ax)
ax.set_title('Number of papers in the DL \'topic\'')
ax.legend(labels=['Not ANN/DL related','NN/DL topic >0.05'])
###Output
_____no_output_____
###Markdown
Some of this is interesting. Doesn't seem to be picking up the policy related terms (safety, discrimination)Next stages - focus on policy related terms. Can we look for papers in keyword dictionaries identified through the word embeddings? Obtain Google Scholar data
###Code
#How many authors are there in the data? Can we collect all their institutions from Google Scholar
paper_authors = pd.Series([x for el in all_papers['authors'] for x in el.split(", ")])
paper_authors_unique = paper_authors.drop_duplicates()
len(paper_authors_unique)
###Output
_____no_output_____
###Markdown
We have 68,000 authors. It might take a while to get their data from Google Scholar
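How long, roughly? A back-of-the-envelope estimate, assuming the 30-queries-per-minute rate limit set by the `ratelim` decorator used further below:
```python
# Rough runtime estimate at 30 Google Scholar queries per minute
n_authors = len(paper_authors_unique)
est_minutes = n_authors / 30
print('{:.0f} authors -> roughly {:.0f} minutes ({:.0f} hours)'.format(n_authors, est_minutes, est_minutes / 60))
```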
###Code
#Top authors and frequencies
authors_freq = paper_authors.value_counts()
fig,ax=plt.subplots(figsize=(10,3))
ax.hist(authors_freq,bins=30)
ax.set_title('Distribution of publications')
#Pretty skewed distribution!
print(authors_freq.describe())
np.sum(authors_freq>2)
###Output
count 68083.000000
mean 1.859701
std 2.795677
min 1.000000
25% 1.000000
50% 1.000000
75% 2.000000
max 150.000000
dtype: float64
###Markdown
Less than 10,000 authors with 3+ papers in the data
###Code
%%time
#Test run
import scholarly
@ratelim.patient(max_calls=30,time_interval=60)
def get_scholar_data(scholarly_object):
    '''Return name, affiliation and interests for the first Google Scholar match, or 'nothing' on failure.'''
try:
scholarly_object = next(scholarly_object)
metadata = {}
metadata['name']=scholarly_object.name
metadata['affiliation'] = scholarly_object.affiliation
metadata['interests'] = scholarly_object.interests
return(metadata)
except:
return('nothing')
#Extract information from each query (it is a generator)
#Get data
#ml_author_gscholar=[]
for num,x in enumerate(paper_authors_unique[1484:]):
if num % 100 == 0:
print(str(num)+":"+x)
result = get_scholar_data(scholarly.search_author(x))
ml_author_gscholar.append(result)
len(ml_author_gscholar)
###Output
_____no_output_____ |
notebooks/Word_Embeddings.ipynb | ###Markdown
Learn with us: www.zerotodeeplearning.comCopyright © 2021: Zero to Deep Learning ® Catalit LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Word Embeddings
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import gzip
import os
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
url = "https://raw.githubusercontent.com/zerotodeeplearning/ztdl-masterclasses/master/data/"
pos_path = tf.keras.utils.get_file(
'rotten_tomatoes_positive_reviews.txt',
url + 'rotten_tomatoes_positive_reviews.txt.gz',
extract=True)
neg_path = tf.keras.utils.get_file(
'rotten_tomatoes_negative_reviews.txt',
url + 'rotten_tomatoes_negative_reviews.txt.gz',
extract=True)
with gzip.open(pos_path) as fin:
pos_rev = fin.readlines()
pos_rev = [r.decode('utf-8') for r in pos_rev]
with gzip.open(neg_path) as fin:
neg_rev = fin.readlines()
neg_rev = [r.decode('utf-8') for r in neg_rev]
docs = np.array(pos_rev + neg_rev)
y = np.array([1]*len(pos_rev) + [0]*len(neg_rev))
docs_train, docs_test, y_train, y_test = train_test_split(docs, y, test_size=0.15, random_state=0)
###Output
_____no_output_____
###Markdown
Sequence encoding with Keras Tokenizer
###Code
max_features = 20000
tokenizer = Tokenizer(
num_words=max_features,
filters='!"#$%&()*+,-./:;<=>?@[\\]^_`\'{|}~\t\n',
lower=True,
split=" ",
char_level=False,
oov_token=None,
document_count=0,
)
tokenizer.fit_on_texts(docs_train)
seq_train = tokenizer.texts_to_sequences(docs_train)
seq_test =tokenizer.texts_to_sequences(docs_test)
seq_train[0]
docs_train[0]
' '.join([tokenizer.index_word[i] for i in seq_train[0]])
max([len(s) for s in seq_train])
max([len(s) for s in seq_test])
maxlen=58
X_train = pad_sequences(seq_train, maxlen=maxlen)
X_test = pad_sequences(seq_test, maxlen=maxlen)
X_train.max()
X_test.max()
###Output
_____no_output_____
###Markdown
Bag of word model with Embeddings
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense, Dropout
embedding_dim=16
model = Sequential([
Embedding(max_features,
embedding_dim,
input_length=maxlen,
name='bow_embeddings'),
Dropout(0.3),
GlobalAveragePooling1D(),
Dense(24, activation='relu'),
Dense(1, activation='sigmoid')
])
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
h = model.fit(
X_train, y_train,
batch_size=128,
epochs=4,
validation_split=0.1)
pd.DataFrame(h.history).plot();
###Output
_____no_output_____
###Markdown
Exercise 1The model above is still a bag of words model, despite the use of embeddings. Let's improve it using 1D convolutional layers.- Define a new `Sequential` model that uses `Conv1D` layers after the `Embedding` layer- Start with the simplest model possible and gradually increase the complexity- Train the model as above and compare the performance of this model with the previous oneYour code will look like:```pythonmodel = Sequential([ Embedding( YOUR CODE HERE YOUR CODE HERE])```
###Code
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, MaxPooling1D, Flatten
###Output
_____no_output_____
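###Markdown
For reference, one possible minimal solution sketch for Exercise 1 (layer sizes and hyperparameters are illustrative, not tuned):
###Code
conv_model = Sequential([
    Embedding(max_features, embedding_dim, input_length=maxlen),
    Conv1D(32, 5, activation='relu'),
    MaxPooling1D(2),
    Conv1D(64, 5, activation='relu'),
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid')
])
conv_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
h_conv = conv_model.fit(X_train, y_train, batch_size=128, epochs=4, validation_split=0.1)
###Output
_____no_output_____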
###Markdown
Gensim and pre-trained embeddings
###Code
import gensim
import gensim.downloader as api
info = api.info()
info.keys()
info['models'].keys()
glove_model = api.load('glove-wiki-gigaword-50')
glove_model.most_similar(positive=['good'], topn=5)
glove_model.most_similar(positive=['two'], topn=5)
glove_model.most_similar(positive=['king', 'woman'],
negative=['man'], topn=3)
glove_size = len(glove_model['cat'])
glove_size
glove_weights = np.zeros(shape=(max_features, glove_size))
for i in range(1, max_features):
w = tokenizer.index_word[i]
try:
v = glove_model[w]
glove_weights[i] = v
except:
pass
plt.subplot(211)
plt.plot(glove_model['two'])
plt.plot(glove_model['three'])
plt.plot(glove_model['four'])
plt.title("A few numbers")
plt.ylim(-2, 5)
plt.subplot(212)
plt.plot(glove_model['cat'])
plt.plot(glove_model['dog'])
plt.plot(glove_model['rabbit'])
plt.title("A few animals")
plt.ylim(-2, 5)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Exercise 2Let's use the Glove pre-trained embeddings as our input layer.- Modify the Embedding layer in your model using a `Constant` initializer that sets the weights to be `glove_weight`- Adapt the `output_dim` to correspond to the size of glove embeddings- Set the Embedding layer to be frozen (`trainable=False`)- Re-train the model and compare the performanceYour code will look like:```pythonmodel = Sequential([ Embedding( YOUR CODE HERE YOUR CODE HERE])```
###Code
from tensorflow.keras.initializers import Constant
###Output
_____no_output_____
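###Markdown
For reference, one possible minimal solution sketch for Exercise 2 (the convolutional part is illustrative; the key changes are the Constant initializer, the glove-sized output dimension and the frozen embedding layer):
###Code
glove_nn = Sequential([
    Embedding(max_features,
              glove_size,
              embeddings_initializer=Constant(glove_weights),
              input_length=maxlen,
              trainable=False),
    Conv1D(64, 5, activation='relu'),
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid')
])
glove_nn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
h_glove = glove_nn.fit(X_train, y_train, batch_size=128, epochs=4, validation_split=0.1)
###Output
_____no_output_____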
|
Courses/Machine Learning Clustering & Retrieval/Latent Dirichlet Allocation for Text Data.ipynb | ###Markdown
Latent Dirichlet Allocation for Text DataIn this assignment you will* apply standard preprocessing techniques on Wikipedia text data* use Turi Create to fit a Latent Dirichlet allocation (LDA) model* explore and interpret the results, including topic keywords and topic assignments for documentsRecall that a major feature distinguishing the LDA model from our previously explored methods is the notion of *mixed membership*. Throughout the course so far, our models have assumed that each data point belongs to a single cluster. k-means determines membership simply by shortest distance to the cluster center, and Gaussian mixture models suppose that each data point is drawn from one of their component mixture distributions. In many cases, though, it is more realistic to think of data as genuinely belonging to more than one cluster or category - for example, if we have a model for text data that includes both "Politics" and "World News" categories, then an article about a recent meeting of the United Nations should have membership in both categories rather than being forced into just one.With this in mind, we will use Turi Create tools to fit an LDA model to a corpus of Wikipedia articles and examine the results to analyze the impact of a mixed membership approach. In particular, we want to identify the topics discovered by the model in terms of their most important words, and we want to use the model to predict the topic membership distribution for a given document. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. Text Data PreprocessingWe'll start by importing our familiar Wikipedia dataset.
###Code
from __future__ import print_function # to conform python 2.x print to python 3.x
import turicreate
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# import wiki data
wiki = turicreate.SFrame('people_wiki.sframe/')
wiki
###Output
_____no_output_____
###Markdown
In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a _bag of words_, which ignores word ordering in the document but retains information on how many times each word appears. As we have seen in our previous encounters with text data, words such as 'the', 'a', or 'and' are by far the most frequent, but they appear so commonly in the English language that they tell us almost nothing about how similar or dissimilar two documents might be. Therefore, before we train our LDA model, we will preprocess the Wikipedia data in two steps: first, we will create a bag of words representation for each article, and then we will remove the common words that don't help us to distinguish between documents. For both of these tasks we can use pre-implemented tools from Turi Create:
###Code
wiki_docs = turicreate.text_analytics.count_words(wiki['text'])
wiki_docs = wiki_docs.dict_trim_by_keys(turicreate.text_analytics.stop_words(), exclude=True)
###Output
_____no_output_____
###Markdown
Model fitting and interpretationIn the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a Turi Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating the topic model using create() from Turi Create's topic_model module.Note: This may take several minutes to run.
###Code
topic_model = turicreate.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
###Output
_____no_output_____
###Markdown
Turi provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that Turi Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
###Code
topic_model
###Output
_____no_output_____
###Markdown
It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will be useful as a document classification system. This can be a challenging task in itself, particularly when the model that we use is complex. We will begin by outlining a sequence of objectives that will help us understand our model in detail. In particular, we will* get the top words in each topic and use these to identify topic themes* predict topic distributions for some example documents* compare the quality of LDA "nearest neighbors" to the NN output from the first assignment* understand the role of model hyperparameters alpha and gamma Load a fitted topic modelThe method used to fit the LDA model is a _randomized algorithm_, which means that it involves steps that are random; in this case, the randomness comes from Gibbs sampling, as discussed in the LDA video lectures. Because of these random steps, the algorithm will be expected to yield slighty different output for different runs on the same data - note that this is different from previously seen algorithms such as k-means or EM, which will always produce the same results given the same input and initialization.It is important to understand that variation in the results is a fundamental feature of randomized methods. However, in the context of this assignment this variation makes it difficult to evaluate the correctness of your analysis, so we will load and analyze a pre-trained model. We recommend that you spend some time exploring your own fitted topic model and compare our analysis of the pre-trained model to the same analysis applied to the model you trained above.
###Code
topic_model = turicreate.load_model('topic_models/lda_assignment_topic_model')
###Output
_____no_output_____
###Markdown
Identifying topic themes by top wordsWe'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do this, we must first recall what exactly a 'topic' is in the context of LDA. In the video lectures on LDA we learned that a topic is a probability distribution over words in the vocabulary; that is, each topic assigns a particular probability to every one of the unique words that appears in our data. Different topics will assign different probabilities to the same word: for instance, a topic that ends up describing science and technology articles might place more probability on the word 'university' than a topic that describes sports or politics. Looking at the highest probability words in each topic will thus give us a sense of its major themes. Ideally we would find that each topic is identifiable with some clear theme _and_ that all the topics are relatively distinct.We can use the Turi Create function get_topics() to view the top words (along with their associated probabilities) from each topic.__Quiz Question:__ Identify the top 3 most probable words for the first topic. **Quiz Question:** What is the sum of the probabilities assigned to the top 50 words in the 3rd topic?
###Code
print (topic_model.get_topics([0], num_words=3))
print (sum(topic_model.get_topics([2], num_words=50)['score']))
###Output
+-------+-----------+----------------------+
| topic | word | score |
+-------+-----------+----------------------+
| 0 | president | 0.008339770494721031 |
| 0 | business | 0.008230612437460937 |
| 0 | board | 0.007476947242117326 |
+-------+-----------+----------------------+
[3 rows x 3 columns]
0.18242098743820911
###Markdown
Let's look at the top 10 words for each topic to see if we can identify any themes:
###Code
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)]
###Output
_____no_output_____
###Markdown
We propose the following themes for each topic:- topic 0: Business- topic 1: Science and research- topic 2: International music- topic 3: Art and publishing- topic 4: Team sports- topic 5: Family and society- topic 6: Politics- topic 7: International athletics- topic 8: TV and film- topic 9: General musicWe'll save these themes for later:
###Code
themes = ['business',
'science and research',
'international music',
'art and publishing',
'team sports',
'family and society',
'politics',
'international athletics',
'TV and film',
'general music']
###Output
_____no_output_____
###Markdown
Measuring the importance of top wordsWe can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words.We'll do this with two visualizations of the weights for the top words in each topic: - the weights of the top 100 words, sorted by the size - the total weight of the top 10 words Here's a plot for the top 100 words by weight in each topic:
###Code
for i in range(10):
plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score'])
plt.xlabel('Word rank')
plt.ylabel('Probability')
plt.title('Probabilities of Top 100 Words in each Topic')
###Output
_____no_output_____
###Markdown
In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summary of our topic model that our vocabulary has 547462 words in total!Next we plot the total weight assigned by each topic to its top 10 words:
###Code
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)]
ind = np.arange(10)
width = 0.5
fig, ax = plt.subplots()
ax.bar(ind-(width/2),top_probs,width)
ax.set_xticks(ind)
plt.xlabel('Topic')
plt.ylabel('Probability')
plt.title('Total Probability of Top 10 Words in each Topic')
plt.xlim(-0.5,9.5)
plt.ylim(0,0.15)
plt.show()
###Output
_____no_output_____
###Markdown
Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a simple 10-word summary.Finally, we observe that some 'junk' words appear highly rated in some topics despite our efforts to remove unhelpful words before fitting the model; for example, the word 'born' appears as a top 10 word in three different topics, but it doesn't help us describe these topics at all. Topic distributions for some example documentsAs we noted in the introduction to this assignment, LDA allows for mixed membership, which means that each document can partially belong to several different topics. For each document, topic membership is expressed as a vector of weights that sum to one; the magnitude of each weight indicates the degree to which the document represents that particular topic.We'll explore this in our fitted model by looking at the topic distributions for a few example Wikipedia articles from our data set. We should find that these articles have the highest weights on the topics whose themes are most relevant to the subject of the article - for example, we'd expect an article on a politician to place relatively high weight on topics related to government, while an article about an athlete should place higher weight on topics related to sports or competition. Topic distributions for documents can be obtained using Turi Create's predict() function. Turi Create uses a collapsed Gibbs sampler similar to the one described in the video lectures, where only the word assignments variables are sampled. To get a document-specific topic proportion vector post-facto, predict() draws this vector from the conditional distribution given the sampled word assignments in the document. Notice that, since these are draws from a _distribution_ over topics that the model has learned, we will get slightly different predictions each time we call this function on a document - we can see this below, where we predict the topic distribution for the article on Barack Obama:
###Code
obama = turicreate.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]])
pred1 = topic_model.predict(obama, output_type='probability')
pred2 = topic_model.predict(obama, output_type='probability')
print(turicreate.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
###Output
+--------------------------+---------------------------+-------------------------+
| predictions (first draw) | predictions (second draw) | topics |
+--------------------------+---------------------------+-------------------------+
| 0.1424731182795699 | 0.0967741935483871 | business |
| 0.056451612903225805 | 0.051075268817204304 | science and research |
| 0.02956989247311828 | 0.03763440860215054 | international music |
| 0.021505376344086023 | 0.03225806451612903 | art and publishing |
| 0.03494623655913978 | 0.04032258064516129 | team sports |
| 0.05913978494623656 | 0.051075268817204304 | family and society |
| 0.5483870967741935 | 0.5725806451612904 | politics |
| 0.06451612903225806 | 0.06989247311827956 | international athletics |
| 0.021505376344086023 | 0.016129032258064516 | TV and film |
| 0.021505376344086023 | 0.03225806451612903 | general music |
+--------------------------+---------------------------+-------------------------+
[10 rows x 3 columns]
###Markdown
To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:
###Code
def average_predictions(model, test_document, num_trials=100):
avg_preds = np.zeros((model.num_topics))
for i in range(num_trials):
avg_preds += model.predict(test_document, output_type='probability')[0]
avg_preds = avg_preds/num_trials
result = turicreate.SFrame({'topics':themes, 'average predictions':avg_preds})
result = result.sort('average predictions', ascending=False)
return result
print(average_predictions(topic_model, obama, 100))
###Output
+----------------------+-------------------------+
| average predictions | topics |
+----------------------+-------------------------+
| 0.566747311827957 | politics |
| 0.10077956989247314 | business |
| 0.07569892473118281 | family and society |
| 0.06413978494623658 | international athletics |
| 0.06271505376344087 | science and research |
| 0.0348924731182796 | team sports |
| 0.029892473118279583 | international music |
| 0.025026881720430105 | art and publishing |
| 0.02080645161290322 | general music |
| 0.019301075268817188 | TV and film |
+----------------------+-------------------------+
[10 rows x 2 columns]
###Markdown
__Quiz Question:__ What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions.
###Code
print (average_predictions(topic_model,
turicreate.SArray([wiki_docs[int(
np.where(wiki['name']=='George W. Bush')[0])]]),
100))
###Output
+----------------------+-------------------------+
| average predictions | topics |
+----------------------+-------------------------+
| 0.4639181286549704 | politics |
| 0.13599415204678364 | business |
| 0.09011695906432747 | family and society |
| 0.06532163742690057 | international athletics |
| 0.06479532163742689 | science and research |
| 0.05304093567251461 | art and publishing |
| 0.043391812865497086 | general music |
| 0.03362573099415204 | TV and film |
| 0.030116959064327483 | team sports |
| 0.01967836257309941 | international music |
+----------------------+-------------------------+
[10 rows x 2 columns]
###Markdown
__Quiz Question:__ What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results from 100 topic predictions.
###Code
print (average_predictions(topic_model,
turicreate.SArray([wiki_docs[int(
np.where(wiki['name']=='Steven Gerrard')[0])]]),
100))
###Output
+----------------------+-------------------------+
| average predictions | topics |
+----------------------+-------------------------+
| 0.6896000000000002 | team sports |
| 0.06559999999999996 | international athletics |
| 0.03688000000000003 | general music |
| 0.035120000000000026 | TV and film |
| 0.03508000000000003 | international music |
| 0.031760000000000024 | business |
| 0.03152000000000002 | politics |
| 0.025480000000000017 | family and society |
| 0.02492000000000002 | art and publishing |
| 0.024040000000000016 | science and research |
+----------------------+-------------------------+
[10 rows x 2 columns]
###Markdown
Comparing LDA to nearest neighbors for document retrievalSo far we have found that our topic model has learned some coherent topics, we have explored these topics as probability distributions over a vocabulary, and we have seen how individual documents in our Wikipedia data set are assigned to these topics in a way that corresponds with our expectations. In this section, we will use the predicted topic distribution as a representation of each document, similar to how we have previously represented documents by word count or TF-IDF. This gives us a way of computing distances between documents, so that we can run a nearest neighbors search for a given document based on its membership in the topics that we learned from LDA. We can contrast the results with those obtained by running nearest neighbors under the usual TF-IDF representation, an approach that we explored in a previous assignment. We'll start by creating the LDA topic distribution representation for each document:
###Code
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
###Output
_____no_output_____
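###Markdown
As a quick aside (a sketch with made-up numbers, not part of the assignment), each entry of `wiki['lda']` is just a length-10 vector of topic proportions, and the nearest neighbors model built below compares documents by the cosine distance between two such vectors:
###Code
u = np.array([0.60, 0.10, 0.05, 0.05, 0.05, 0.05, 0.04, 0.03, 0.02, 0.01])  # hypothetical document 1
v = np.array([0.55, 0.12, 0.06, 0.05, 0.05, 0.05, 0.04, 0.04, 0.02, 0.02])  # hypothetical document 2
1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))  # close to 0 => very similar topic mixes
###Output
_____no_output_____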
###Markdown
Next we add the TF-IDF document representations:
###Code
wiki['word_count'] = turicreate.text_analytics.count_words(wiki['text'])
wiki['tf_idf'] = turicreate.text_analytics.tf_idf(wiki['word_count'])
###Output
_____no_output_____
###Markdown
For each of our two different document representations, we can use Turi Create to compute a brute-force nearest neighbors model:
###Code
model_tf_idf = turicreate.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
model_lda_rep = turicreate.nearest_neighbors.create(wiki, label='name', features=['lda'],
method='brute_force', distance='cosine')
###Output
_____no_output_____
###Markdown
Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:
###Code
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10)
###Output
_____no_output_____
###Markdown
Notice that that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. With TF-IDF, documents are distinguished by the frequency of uncommon words. Since similarity is defined based on the specific words used in the document, documents that are "close" under TF-IDF tend to be similar in terms of specific details. This is what we see in the example: the top 10 nearest neighbors are all economists from the US, UK, or Canada. Our LDA representation, on the other hand, defines similarity between documents in terms of their topic distributions. This means that documents can be "close" if they share similar themes, even though they may not share many of the same keywords. For the article on Paul Krugman, we expect the most important topics to be 'American college and politics' and 'science and research'. As a result, we see that the top 10 nearest neighbors are academics from a wide variety of fields, including literature, anthropology, and religious studies. Understanding the role of LDA model hyperparametersFinally, we'll take a look at the effect of the LDA model hyperparameters alpha and gamma on the characteristics of our fitted model. Recall that alpha is a parameter of the prior distribution over topic weights in each document, while gamma is a parameter of the prior distribution over word weights in each topic. In the video lectures, we saw that alpha and gamma can be thought of as smoothing parameters when we compute how much each document "likes" a topic (in the case of alpha) or how much each topic "likes" a word (in the case of gamma). In both cases, these parameters serve to reduce the differences across topics or words in terms of these calculated preferences; alpha makes the document preferences "smoother" over topics, and gamma makes the topic preferences "smoother" over words.Our goal in this section will be to understand how changing these parameter values affects the characteristics of the resulting topic model.__Quiz Question:__ What was the value of alpha used to fit our original topic model?
###Code
alex_rodriguez_tfidf = list(model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'],
label='name', k=5000)['reference_label'])
print ('value of k:', alex_rodriguez_tfidf.index('Mariano Rivera'))
###Output
_____no_output_____
###Markdown
__Quiz Question:__ What was the value of gamma used to fit our original topic model? Remember that Turi Create uses "beta" instead of "gamma" to refer to the hyperparameter that influences topic distributions over words.
###Code
alex_rodriguez_lda = list(model_lda_rep.query(wiki[wiki['name'] == 'Alex Rodriguez'],
label='name', k=5000)['reference_label'])
print ('value of k:', alex_rodriguez_lda.index('Mariano Rivera'))
topic_model.summary()  # as noted earlier, the summary lists the alpha and beta (gamma) hyperparameters
###Output
_____no_output_____
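###Markdown
As a brief aside (not part of the assignment), the smoothing role of a Dirichlet concentration parameter can be illustrated directly with NumPy's Dirichlet sampler, using the two alpha values compared below:
###Code
np.random.seed(0)
print(np.round(np.random.dirichlet([1.0] * 10), 2))   # alpha = 1: topic weights can be quite uneven
print(np.round(np.random.dirichlet([50.0] * 10), 2))  # alpha = 50: weights pulled toward 1/10 each
###Output
_____no_output_____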
###Markdown
We'll start by loading some topic models that have been trained using different settings of alpha and gamma. Specifically, we will start by comparing the following two models to our original topic model: - tpm_low_alpha, a model trained with alpha = 1 and default gamma - tpm_high_alpha, a model trained with alpha = 50 and default gamma
###Code
tpm_low_alpha = turicreate.load_model('topic_models/lda_low_alpha')
tpm_high_alpha = turicreate.load_model('topic_models/lda_high_alpha')
###Output
_____no_output_____
###Markdown
Changing the hyperparameter alphaSince alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic weights for the Wikipedia article on Barack Obama under models fit with high, original, and low settings of alpha.
###Code
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1]
b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1]
c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1]
ind = np.arange(len(a))
width = 0.3
def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,ylab):
fig = plt.figure()
ax = fig.add_subplot(111)
b1 = ax.bar(ind, a, width, color='lightskyblue')
b2 = ax.bar(ind+width, b, width, color='lightcoral')
b3 = ax.bar(ind+(2*width), c, width, color='gold')
ax.set_xticks(ind+width)
ax.set_xticklabels(range(10))
ax.set_ylabel(ylab)
ax.set_xlabel(xlab)
ax.set_ylim(0,ylim)
ax.legend(handles = [b1,b2,b3],labels=['low '+param,'original model','high '+param])
plt.tight_layout()
param_bar_plot(a,b,c,ind,width,ylim=1.0,param='alpha',
xlab='Topics (sorted by weight of top 100 words)',ylab='Topic Probability for Obama Article')
###Output
_____no_output_____
###Markdown
Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics.__Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **low alpha** model? Use the average results from 100 topic predictions.
###Code
paul_krugman = turicreate.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]])
paul_krugman_low_alpha = average_predictions(tpm_low_alpha, paul_krugman, 100)
val = paul_krugman_low_alpha[(paul_krugman_low_alpha['average predictions'] > 0.3) |
(paul_krugman_low_alpha['average predictions'] < 0.05)
]
print (val)
print (len(val))
###Output
+----------------------+----------------------+
| average predictions | topics |
+----------------------+----------------------+
| 0.4616666666666665 | art and publishing |
| 0.017160493827160506 | science and research |
| 0.016913580246913605 | general music |
| 0.013024691358024709 | family and society |
| 0.010864197530864211 | business |
| 0.010493827160493841 | politics |
| 0.010123456790123466 | TV and film |
+----------------------+----------------------+
[? rows x 2 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
7
###Markdown
__Quiz Question:__ How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the **high alpha** model? Use the average results from 100 topic predictions.
###Code
paul_krugman_high_alpha = average_predictions(tpm_high_alpha, paul_krugman, 100)
val = paul_krugman_high_alpha[(paul_krugman_high_alpha['average predictions'] > 0.3) |
(paul_krugman_high_alpha['average predictions'] < 0.05)
]
print (val)
print (len(val))
###Output
+---------------------+--------+
| average predictions | topics |
+---------------------+--------+
+---------------------+--------+
[? rows x 2 columns]
Note: Only the head of the SFrame is printed. This SFrame is lazily evaluated.
You can use sf.materialize() to force materialization.
0
###Markdown
Changing the hyperparameter gammaJust as we were able to see the effect of alpha by plotting topic weights for a document, we expect to be able to visualize the impact of changing gamma by plotting word weights for each topic. In this case, however, there are far too many words in our vocabulary to do this effectively. Instead, we'll plot the total weight of the top 100 words and bottom 1000 words for each topic. Below, we plot the (sorted) total weights of the top 100 words and bottom 1000 from each topic in the high, original, and low gamma models. Now we will consider the following two models: - tpm_low_gamma, a model trained with gamma = 0.02 and default alpha - tpm_high_gamma, a model trained with gamma = 0.5 and default alpha
###Code
del tpm_low_alpha
del tpm_high_alpha
tpm_low_gamma = turicreate.load_model('topic_models/lda_low_gamma')
tpm_high_gamma = turicreate.load_model('topic_models/lda_high_gamma')
a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
b_top = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
c_top = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1]
a_bot = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
b_bot = np.sort([sum(topic_model.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
c_bot = np.sort([sum(tpm_high_gamma.get_topics(topic_ids=[i], num_words=547462)[-1000:]['score']) for i in range(10)])[::-1]
ind = np.arange(len(a))
width = 0.3
param_bar_plot(a_top, b_top, c_top, ind, width, ylim=0.6, param='gamma',
xlab='Topics (sorted by weight of top 100 words)',
ylab='Total Probability of Top 100 Words')
param_bar_plot(a_bot, b_bot, c_bot, ind, width, ylim=0.0002, param='gamma',
xlab='Topics (sorted by weight of bottom 1000 words)',
ylab='Total Probability of Bottom 1000 Words')
###Output
_____no_output_____
###Markdown
From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics that have a smoother distribution of weight across all the words in the vocabulary. __Quiz Question:__ For each topic of the **low gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from Turi Create with the cdf\_cutoff argument).
###Code
def calculate_avg_words(model):
num_topics = 10
avg_num_of_words = []
for i in range(num_topics):
avg_num_of_words.append(len(model.get_topics(topic_ids=[i], num_words=547462, cdf_cutoff=.5)))
avg_num_of_words = np.mean(avg_num_of_words)
return avg_num_of_words
calculate_avg_words(tpm_low_gamma)
###Output
_____no_output_____
###Markdown
__Quiz Question:__ For each topic of the **high gamma model**, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get\_topics() function from Turi Create with the cdf\_cutoff argument).
###Code
calculate_avg_words(tpm_high_gamma)
###Output
_____no_output_____ |
5 - Pandas - Reading CSV and Basic Plotting.ipynb | ###Markdown
<img src="https://user-images.githubusercontent.com/7065401/75165824-badf4680-5701-11ea-9c5b-5475b0a33abf.png" style="width:300px; float: right; margin: 0 40px 40px 40px;"> Reading external data & Plotting[Source](https://blockchain.info/charts/market-price)  Hands on!
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Pandas can easily read data stored in different file formats like CSV, JSON, XML or even Excel. Parsing always involves specifying the correct structure, encoding and other details. The `read_csv` method reads CSV files and accepts many parameters.
###Code
pd.read_csv?
df = pd.read_csv('data/btc-market-price.csv')
df.head()
###Output
_____no_output_____
###Markdown
The CSV file we're reading has only two columns: `timestamp` and `price`. It doesn't have a header, it contains whitespaces and has values separated by commas. pandas automatically assigned the first row of data as headers, which is incorrect. We can overwrite this behavior with the `header` parameter:
###Code
df = pd.read_csv('data/btc-market-price.csv', header=None)
df.head()
###Output
_____no_output_____
###Markdown
We can then set the names of each column explicitly by setting the `df.columns` attribute:
###Code
df.columns = ['Timestamp', 'Price']
df.shape
df.head()
df.tail(3)
###Output
_____no_output_____
###Markdown
The type of the `Price` column was correctly interpreted as `float`, but the `Timestamp` was interpreted as a regular string (`object` in pandas notation):
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We can perform a vectorized operation to parse all the Timestamp values as `Datetime` objects:
###Code
pd.to_datetime(df['Timestamp']).head()
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
df.head()
df.dtypes
###Output
_____no_output_____
###Markdown
The timestamp looks a lot like a natural index for this `DataFrame`: `date > price`. We can replace the auto-incremental ID generated by pandas and use the `Timestamp` column as the index:
###Code
df.set_index('Timestamp', inplace=True)
df.head()
df.loc['2017-09-29']
###Output
_____no_output_____
###Markdown
 Putting everything togetherAnd now, we've finally arrived at the final, desired version of the `DataFrame` parsed from our CSV file. The steps were:
###Code
df = pd.read_csv('data/btc-market-price.csv', header=None)
df.columns = ['Timestamp', 'Price']
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
df.set_index('Timestamp', inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
**There should be a better way**. And there is 😎. And there usually is, especially with all these repetitive tasks in pandas.The `read_csv` function is extremely powerful and you can specify many more parameters at import time. We can achieve the same results with only one line by doing:
###Code
df = pd.read_csv(
'data/btc-market-price.csv',
header=None,
names=['Timestamp', 'Price'],
index_col=0,
parse_dates=True
)
df.head()
df.loc['2017-09-29']
###Output
_____no_output_____
###Markdown
 Plotting basics`pandas` integrates with Matplotlib and creating a plot is as simple as:
###Code
df.plot()
###Output
_____no_output_____
###Markdown
Behind the scenes, it's using `matplotlib.pyplot`'s interface. We can create a similar plot with the `plt.plot()` function:
###Code
plt.plot(df.index, df['Price'])
###Output
_____no_output_____
###Markdown
`plt.plot()` accepts many parameters, but the first two are the most important: the values for the `X` and `Y` axes. Another example:
###Code
x = np.arange(-10, 11)
plt.plot(x, x ** 2)
###Output
_____no_output_____
###Markdown
We're using `matplotlib`'s global API, which is horrible but it's the most popular one. We'll learn later how to use the _OOP_ API which will make our work much easier.
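###Markdown
As a quick preview (we will not use it yet), the _OOP_ style writes the same plot through an explicit figure/axes pair instead of the global state:
###Code
fig, ax = plt.subplots(figsize=(12, 6))
ax.plot(x, x ** 2)
ax.plot(x, -1 * (x ** 2))
ax.set_title('OOP API preview')
###Output
_____no_output_____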
###Code
plt.plot(x, x ** 2)
plt.plot(x, -1 * (x ** 2))
###Output
_____no_output_____
###Markdown
Each `plt` function alters the global state. If you want to configure settings of your figure (its size, for example) you can use the `plt.figure` function. Others like `plt.title` keep altering the global plot:
###Code
plt.figure(figsize=(12, 6))
plt.plot(x, x ** 2)
plt.plot(x, -1 * (x ** 2))
plt.title('My Nice Plot')
###Output
_____no_output_____
###Markdown
Some of the arguments in `plt.figure` and `plt.plot` are available in the pandas' `plot` interface:
###Code
df.plot(figsize=(16, 9), title='Bitcoin Price 2017-2018')
###Output
_____no_output_____
###Markdown
 A more challenging parsingTo demonstrate plotting two columns together, we'll try to add Ether prices to our `df` DataFrame. The ETH price data can be found in the `data/eth-price.csv` file. The problem is that this CSV file seems to have been created by someone who really hated programmers. Take a look at it and see how ugly it looks. We'll still use `pandas` to parse it.
###Code
eth = pd.read_csv('data/eth-price.csv')
eth.head()
###Output
_____no_output_____
###Markdown
As you can see, it has a `Value` column (which represents the price), a `Date(UTC)` column that has a string representing dates, and also a `UnixTimeStamp` column representing the datetime in Unix timestamp format. The header is read automatically; let's try to parse dates with the CSV reader:
###Code
eth = pd.read_csv('data/eth-price.csv', parse_dates=True)
print(eth.dtypes)
eth.head()
###Output
Date(UTC) object
UnixTimeStamp int64
Value float64
dtype: object
###Markdown
Seems like the `parse_dates` attribute didn't work. We'll need to add a little bit more customization. Let's divide this problem and focus on the problem of "date parsing" first. The simplest option would be to use the `UnixTimeStamp` column. The `pandas` module has a `to_datetime` function that converts Unix timestamps to Datetime objects automatically:
###Code
pd.to_datetime(eth['UnixTimeStamp']).head()
###Output
_____no_output_____
###Markdown
The problem is the precision of the Unix timestamps. To match both columns we'll need to use the same index, and our `df` containing Bitcoin prices is indexed "per day":
###Code
df.head()
###Output
_____no_output_____
###Markdown
We could either remove the precision of `UnixTimeStamp` or attempt to parse the `Date(UTC)`. Let's do string parsing of `Date(UTC)` for fun:
###Code
pd.to_datetime(eth['Date(UTC)']).head()
###Output
_____no_output_____
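###Markdown
(As an aside, the other option mentioned above, truncating `UnixTimeStamp` to day precision, could look like the sketch below, assuming the timestamps are in seconds; we won't use it here.)
###Code
pd.to_datetime(eth['UnixTimeStamp'], unit='s').dt.normalize().head()
###Output
_____no_output_____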
###Markdown
The `Date(UTC)` string parsing seems to work fine! Why isn't `read_csv` parsing the `Date(UTC)` column, then? Simple: the `parse_dates=True` parameter will instruct pandas to parse the index of the `DataFrame`. If you want to parse any other column, you must explicitly pass the column position or name:
###Code
pd.read_csv('data/eth-price.csv', parse_dates=[0]).head()
###Output
_____no_output_____
###Markdown
Putting everything together again:
###Code
eth = pd.read_csv('data/eth-price.csv', parse_dates=True, index_col=0)
print(eth.info())
eth.head()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 362 entries, 2017-04-02 to 2018-04-01
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 UnixTimeStamp 362 non-null int64
1 Value 362 non-null float64
dtypes: float64(1), int64(1)
memory usage: 8.5 KB
None
###Markdown
We can now combine both `DataFrame`s into one. Both have the same index, so aligning both prices will be easy. Let's first create an empty `DataFrame` with the index from Bitcoin prices:
###Code
prices = pd.DataFrame(index=df.index)
prices.head()
###Output
_____no_output_____
###Markdown
And we can now just set columns from the other `DataFrame`s:
###Code
prices['Bitcoin'] = df['Price']
prices['Ether'] = eth['Value']
prices.head()
###Output
_____no_output_____
###Markdown
We can now try plotting both values:
###Code
prices.plot(figsize=(12, 6))
###Output
_____no_output_____
###Markdown
🤔 Seems like there's a tiny gap between Dec 2017 and Jan 2018. Let's zoom in there:
###Code
prices.loc['2017-12-01':'2018-01-01'].plot(figsize=(12, 6))
###Output
_____no_output_____ |
.ipynb_checkpoints/Tutorial 6 - Numerical Python-checkpoint.ipynb | ###Markdown
 BASIC PYTHON FOR RESEARCHERS _by_ [**_Megat Harun Al Rashid bin Megat Ahmad_**](https://www.researchgate.net/profile/Megat_Harun_Megat_Ahmad) last updated: April 14, 2016 ------- _6. Numerical Python_ In _Python_, a list can be treated as an array, and operating on each element of the list can be done by accessing the elements sequentially (_e.g._ by using a $for$ loop).
###Code
# Without Numpy, to get the square root on a list of numbers:
from math import *
x1 = [23,12,45]
for y, z in enumerate(x1):
Ex1 = sqrt(z)
print Ex1
###Output
4.79583152331
3.46410161514
6.7082039325
###Markdown
With $NumPy$, elements in the list can be operated on directly without the need to access them sequentially. This can be done by transforming the _Python_ list into a $NumPy$ array. $NumPy$ is the fundamental _Python_ package for scientific computing; its major function is to allow easy operations on and between homogeneous multi-dimensional arrays. Available features in $NumPy$ can be accessed by importing the library.
###Code
from numpy import *
# With Numpy, to get the square root on a list of numbers:
# Creating the Python list
x1 = [23,12,45]
# Transforming the Python list into NumPy array
xnp = array(x1)
# Here we use the available NumPy square root function
Ex1 = sqrt(xnp)
print xnp
print Ex1
###Output
[23 12 45]
[ 4.79583152 3.46410162 6.70820393]
###Markdown
Elements of a $NumPy$ array are not separated by commas when printed; they are separated only by spaces. Performing mathematical operations between arrays requires (part of) the arrays to be of the same size (_e.g._ a 5 x 10 array can be multiplied with another 5 x 10 array or with a 5 x 1 array). This will be explored further.
###Code
# Array operation on real and imaginary parts to obtain complex conjugate
i = array([12.7,9.3,0.8])
j = array([4.5j,2.7j,3.1j])
print (i + j)*(i + j).conjugate()
###Output
[ 181.54+0.j 93.78+0.j 10.25+0.j]
###Markdown
*** **_6.1 Features of NumPy Array_** Creating an array using $NumPy$ can be done in several ways. In all the following examples, $NumPy$ is imported as $np$. Therefore $NumPy$ can be initially accessed by using $np.\{function\}$. This allows the separate usage of similar wordings $NumPy$ and _Python_ $math$ function like $sqrt$. It is also possible to use $NumPy$ function names directly that have no equivalent in _Python_ functions.
###Code
import numpy as np
# Creating a one-dimensional array with sequential integer values
a = np.arange(15)
a
###Output
_____no_output_____
###Markdown
This creates a one-dimensional array with 15 elements, from 0 to 14. The range and increment of the array elements during array creation can be specified by using the _(min, max, step)_ arguments of the $arange$ function.
###Code
'''Below means that the g array is going to contains range of elements with minimum
value of 10 (inclusive of 10) up to a maximum value of 50 (exclusive of or less than 50) and with
an increment of 5'''
g = np.arange(10,50,5)
print g
# To include 50, just set 50 to 51
g = np.arange(10,51,5)
print g
###Output
[10 15 20 25 30 35 40 45 50]
###Markdown
Another useful array-creating function is $linspace$, which allows specifying the (both inclusive) minimum and maximum values and the number of array elements.
###Code
x = np.linspace(0,2*np.pi,20) # note: np.pi is used instead of just pi
print x
# Checking the inclusivity
print x.min()
print x.max()
print x.size
###Output
0.0
6.28318530718
20
###Markdown
To create a higher-dimensional array, we can use any one of these functions and then use the $reshape$ function. Let's reshape variable **_a_** into a two-dimensional 3 x 5 array.
###Code
# Creating a two-dimensional 3 x 5 array from b
b = a.reshape(3,5)
b
###Output
_____no_output_____
###Markdown
Shaping a three dimensional 3 x 3 x 3 array:
###Code
c = np.arange(27).reshape(3,3,3)
print c
###Output
[[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]]
[[ 9 10 11]
[12 13 14]
[15 16 17]]
[[18 19 20]
[21 22 23]
[24 25 26]]]
###Markdown
We can check the shape, dimension and size of these arrays, as well as compute statistics such as the mean, average and sum of their elements.
###Code
print c.shape # shape of array c
print b.ndim # dimension of array b
print a.size # size of array a or the number of elements in array a
# mean value for the c array
np.mean(c)
# mean value for b array elements in each column
np.mean(b, axis=0)
# mean value for b array elements in each row
np.mean(b, axis=1)
# average value for a all array elements
np.average(a)
# sum values for c array elements in axis 2
np.sum(c,axis=2)
###Output
_____no_output_____
###Markdown
Earlier in this tutorial, a simple mathematical operation on a complex number array was shown. $NumPy$ arrays can be operated with each other element-wise when the arrays are of the same size and dimension (or, as shown below, when part of their shapes match).
###Code
# Finding the sin^2(x) + cos^2(x) (trigonometric identity)
np.sin(x)**2 + np.cos(x)**2 # note: np.sin and np.cos are used instead of just sin and cos
z = np.linspace(1,5.347,20)
x*z # note: arrays of the same size and dimension
a1 = np.linspace(1,8.75,12).reshape(3,4) # a1 is 3 x 4 array
a1
b1 = np.linspace(1,4,4) # b1 is 1 x 4 array
b1
a1*b1
###Output
_____no_output_____
###Markdown
It is possible to operate **_a1_** with **_b1_** as one part of the arrays (the column size) is equivalent. |array|row|column||---|---|---||**_a1_**|$$3$$|$$4$$||**_b1_**|$$1$$|$$4$$| Let **_c1_** be a 1 x 3 array:
###Code
c1 = np.linspace(1,3,3)
c1
a1*c1
###Output
_____no_output_____
###Markdown
The interpreter produces an error as neither the row nor the column size of **_a1_** is equivalent to that of **_c1_**. |array|row|column||---|---|---||**_a1_**|$$3$$|$$4$$||**_c1_**|$$1$$|$$3$$| It is still possible to operate **_a1_** with **_c1_**. This can be done by transposing **_a1_** (**_a1_** now becomes a 4 x 3 array, so its column size is the same as **_c1_**'s column size).
###Code
a1.T*c1
###Output
_____no_output_____
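###Markdown
An alternative (a small sketch, not in the original tutorial) is to leave **_a1_** untouched and reshape **_c1_** into a 3 x 1 column instead, so that broadcasting matches it against the rows of **_a1_**:
###Code
a1 * c1.reshape(3,1) # same products as a1.T*c1, without transposing a1
###Output
_____no_output_____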
###Markdown
Other types of arrays are arrays of ones and zeros. These are flexible arrays and can be used for many purposes, _e.g._ indexing collision particles in two dimensions. There is also the $mgrid$ function, which is similar to meshgrid in MATLAB.
###Code
y = np.ones((10,10))
print y
z = np.zeros((5,5))
print z
i, j = np.mgrid[0:4, 0:4] # similar to meshgrid in MATLAB
i
j
###Output
_____no_output_____
###Markdown
 Exercise 6.1: Create this 2-dimensional array using NumPy: $$\left[ \begin{array}{cccccccccc}0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0\\1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0\\1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0\end{array} \right]$$
###Code
m, n = np.mgrid[0:10, 0:10]
(m > n)*1
###Output
_____no_output_____
###Markdown
A $NumPy$ array can be sliced, extracted and reassigned similarly to _Python_ sequences.
###Code
# Slicing
x = np.linspace(0,2*np.pi,20)
print x
print x[3:5]
y = np.arange(15).reshape(3,5)
print y
print y[1] # the whole row of index 1
z = y[0:2,1:4] # assigning to variable
z
y[0:2,1:4] = 0 # reassigned
y
# reassigned with new array
y[0:2,1:4] = np.array([[45,89,12],[12,73,69]])
y
###Output
_____no_output_____
###Markdown
There are many other ways to create $NumPy$ array, some are quite fancy _e.g._:
###Code
X = np.array([n**np.e for n in range(5)]) # note: np.e represent exponential constant
X
# i is row index and j is column index
W = np.array([[j+i*3 for j in range(3)] for i in range(3)])
W
###Output
_____no_output_____
###Markdown
*** **_6.2 File Input/Output_** $NumPy$ array can be saved into a binary file which later can be loaded. 1-D and 2-D arrays can also be saved in text format.
###Code
'''Let us build a 3-dimensional 4 x 4 x 4 array
containing elements created using random numbers'''
rand_array = np.random.rand(4,4,4)
rand_array
###Output
_____no_output_____
###Markdown
_(The default random number generator algorithm is Mersenne-Twister)_
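###Markdown
(A small aside: if reproducible random arrays are needed when re-running this notebook, the random number generator can be seeded first.)
###Code
np.random.seed(2016) # any fixed integer; subsequent np.random.rand() calls then repeat across runs
###Output
_____no_output_____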
###Code
np.save("Tutorial6/random_3D_array.npy", rand_array)
np.load("Tutorial6/random_3D_array.npy")
rand2_array = np.random.rand(4,4)
###Output
_____no_output_____
###Markdown
The coming examples show how to save an array into a text file.
###Code
np.savetxt("Tutorial6/random_2D_array.txt", rand2_array, delimiter=',')
###Output
_____no_output_____
###Markdown
In the above example, the array _rand2_array_ is saved as comma-separated values by specifying a comma as the delimiter.
###Code
fh_csv = open("Tutorial6/random_2D_array.txt")
fh_csvList = fh_csv.readlines()
fh_csvList
###Output
_____no_output_____
###Markdown
It is possible to specify the format of the decimal values when saving to a text file.
###Code
np.savetxt("Tutorial6/random_2D_arrayF.txt", rand2_array, \
delimiter=',', fmt='%.3f')
fh_csv1 = open("Tutorial6/random_2D_arrayF.txt")
fh_csv1List = fh_csv1.readlines()
fh_csv1List
###Output
_____no_output_____
###Markdown
Similar to a binary file, a text file can be loaded as an array, with the delimiter used specified.
###Code
fromtxtFile = np.loadtxt("Tutorial6/random_2D_arrayF.txt", delimiter=',')
fromtxtFile
###Output
_____no_output_____
###Markdown
*** **_6.3 Matrix Operations_** What we have seen previously were scalar array operations. $NumPy$ array can also undergoes matrix operations. $NumPy$ has special functions for matrix operations.
###Code
# Creating two arrays with random numbers, M1 and M2
M1 = np.random.rand(3,3)*np.linspace(1,36,9).reshape(3,3)
M2 = np.random.rand(3,3)*np.linspace(1,2*np.e,9).reshape(3,3)
M1, M2
# dot product
np.dot(M1,M2)
N1 = np.arange(2,8,2)
N1
np.dot(M1,N1)
np.dot(N1,N1)
###Output
_____no_output_____
###Markdown
Standard arithmetic operators (+, -, `*`) can be used for matrix operations by casting the $NumPy$ array to the matrix type.
###Code
W1 = np.matrix(M1)
W1
Y = np.matrix(N1).T
Y
W2 = np.matrix(M2)
W1*W2 # instead of using np.dot() function
###Output
_____no_output_____
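###Markdown
(A side note: on _Python_ 3 and recent $NumPy$ versions the matrix class is nowadays discouraged, and the same matrix product is usually written with plain arrays and the @ operator; this tutorial itself runs on _Python_ 2, so the line below is left commented.)
###Code
# M1 @ M2 # Python 3.5+ only: equivalent to np.dot(M1, M2) on ordinary arrays
###Output
_____no_output_____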
###Markdown
Matrix operations must adhere to standard matrix algebra, and it is good practice to check the shapes of the matrices before any operation.
###Code
np.shape(W1), np.shape(Y)
Y + W1*Y
###Output
_____no_output_____
###Markdown
A few other mathematical functions on matrices include:
###Code
# Finding determinant
np.linalg.det(W1)
# Inverse matrix
np.linalg.inv(W1)
np.linalg.inv(W1)*W1
###Output
_____no_output_____ |
Africa/GEBCO_Bouguer_Freeair_correlograms.Africa30x30.ipynb | ###Markdown
MIT LicenseCopyright (c) 2019 Alexey Pechnikov, https://orcid.org/0000-0001-9626-8615 (ORCID)
###Code
import xarray as xr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Define functions
###Code
from scipy.ndimage.filters import gaussian_filter
# band filter
def raster_gamma_range(raster0, g1, g2):
raster = raster0.copy()
raster.values = raster.values.astype(np.float32)
raster.values = gaussian_filter(raster.values,g1) \
- gaussian_filter(raster.values,g2)
return raster
# correlation matrix between two rasters across a range of spatial wavelength bands
def correlogram(raster1, raster2, gammas):
# spatial filtering
rasters1 = []
rasters2 = []
for g in gammas:
print (g,". ", end = '')
_raster1 = raster_gamma_range(raster1, g-.5, g+.5)
rasters1.append(_raster1)
_raster2 = raster_gamma_range(raster2, g-.5, g+.5)
rasters2.append(_raster2)
print ()
corrs = []
for ridx in range(len(gammas)):
print (ridx+1,". ", end = '')
_raster2 = rasters2[ridx]
for didx in range(len(gammas)):
_raster1 = rasters1[didx]
df = pd.DataFrame({'raster1': _raster1.values.flatten(), 'raster2': _raster2.values.flatten()})
corr = round((df.corr()).iloc[0,1],2)
corrs.append(corr)
da_corr = xr.DataArray(np.array(corrs).reshape([len(gammas),len(gammas)]),
coords=[resolution*gammas,resolution*gammas],
dims=['raster2','raster1'])
return (rasters1, rasters2, da_corr)
###Output
_____no_output_____
###Markdown
Define parameters
###Code
# to load source data
GEBCO="GEBCO_2019.Africa30x30.tif"
BOUGUER="WGM2012_Bouguer_ponc_2min.Africa30x30.tif"
FREEAIR="WGM2012_Freeair_ponc_2min.Africa30x30.tif"
# rasters below defined in decimal degrees
# this coefficient [km/pixel] for pixel-based filtering and plotting
resolution = 3.7
GAMMA = 28
DGAMMA= 1
###Output
_____no_output_____
###Markdown
Load datasets
###Code
dem = xr.open_rasterio(GEBCO).rename({'x':'lon','y':'lat'})
dem
bgr = xr.open_rasterio(BOUGUER).rename({'x':'lon','y':'lat'})
bgr
frr = xr.open_rasterio(FREEAIR).rename({'x':'lon','y':'lat'})
frr
###Output
_____no_output_____
###Markdown
Compare source datasets
###Code
fig, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(15,5))
dem.plot(ax=ax1, cmap='terrain')
ax1.set_title('GEBCO_2019',fontsize=16)
bgr.plot(ax=ax2, cmap='terrain')
ax2.set_title('WGM2012 Bouguer',fontsize=16)
frr.plot(ax=ax3, cmap='terrain')
ax3.set_title('WGM2012 Free-Air',fontsize=16)
fig.tight_layout(rect=[0.03, 0.0, 1, 0.9])
plt.suptitle('GEBCO_2019 and WGM2012 Bouguer and Free-Air Gravity Anomalies',fontsize=20)
#plt.savefig('GEBCO_2019 and WGM2012 Bouguer and Free-Air Gravity Anomalies.jpg', dpi=150)
plt.show()
###Output
_____no_output_____
###Markdown
Make correlogram
###Code
gammas = np.arange(DGAMMA,GAMMA+DGAMMA/2,DGAMMA)
(dems,bgrs,da_bgr_corr) = correlogram(dem, bgr, gammas)
(dems,frrs,da_frr_corr) = correlogram(dem, frr, gammas)
float(da_bgr_corr.min()),float(da_bgr_corr.max())
float(da_frr_corr.min()),float(da_frr_corr.max())
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(10.5,5))
da_bgr_corr.plot(cmap='RdBu_r',ax=ax1, vmin=-1,vmax=1)
contours = da_bgr_corr.plot.contour(levels=[-.75,-.5],colors=['lightgray','gray'],linestyles='--',ax=ax1)
ax1.clabel(contours, contours.levels, inline=True, fmt='%r', colors=['white','gainsboro'], fontsize=14)
ax1.set_xlabel('GEBCO_2019 Wavelength, km',fontsize=12)
ax1.set_ylabel('WGM2012 Gravity Wavelength, km',fontsize=12)
ax1.set_title('WGM2012 Bouguer',fontsize=16)
da_frr_corr.plot(cmap='RdBu_r',ax=ax2, vmin=-1,vmax=1)
contours = da_frr_corr.plot.contour(levels=[.5,.75],colors=['gray','lightgray'],linestyles='--',ax=ax2)
ax2.clabel(contours, contours.levels, inline=True, fmt='%r', colors=['gainsboro','white'], fontsize=14)
ax2.set_xlabel('GEBCO_2019 Wavelength, km',fontsize=12)
ax2.set_ylabel('WGM2012 Gravity Wavelength, km',fontsize=12)
ax2.set_title('WGM2012 Free-Air',fontsize=16)
plt.suptitle('Pearson Correlation Coefficient:\nGEBCO_2019 and WGM2012 Bouguer and Free-Air Gravity Anomalies',fontsize=20)
fig.tight_layout(rect=[0.03, 0.0, 1, 0.85])
#plt.savefig('Correlation GEBCO_2019 and WGM2012 Bouguer and Free-Air Gravity Anomalies.jpg', dpi=150)
plt.show()
###Output
_____no_output_____ |
doc/ex4_update_in_place.ipynb | ###Markdown
In-Place Waveform Library UpdatesThis example notebook shows how one can update pulse data in-place without recompiling.© Raytheon BBN Technologies 2020 Set the `SAVE_WF_OFFSETS` flag so that QGL will output a map of the waveform data within the compiled binary waveform library.
###Code
from QGL import *
import QGL
import os.path
import pickle
QGL.drivers.APS2Pattern.SAVE_WF_OFFSETS = True
###Output
_____no_output_____
###Markdown
Create the usual channel library with a couple of AWGs.
###Code
cl = ChannelLibrary(":memory:")
q1 = cl.new_qubit("q1")
aps2_1 = cl.new_APS2("BBNAPS1", address="192.168.5.101")
aps2_2 = cl.new_APS2("BBNAPS2", address="192.168.5.102")
dig_1 = cl.new_X6("X6_1", address=0)
h1 = cl.new_source("Holz1", "HolzworthHS9000", "HS9004A-009-1", power=-30)
h2 = cl.new_source("Holz2", "HolzworthHS9000", "HS9004A-009-2", power=-30)
cl.set_control(q1, aps2_1, generator=h1)
cl.set_measure(q1, aps2_2, dig_1.ch(1), generator=h2)
cl.set_master(aps2_1, aps2_1.ch("m2"))
cl["q1"].measure_chan.frequency = 0e6
cl.commit()
###Output
Creating engine...
###Markdown
Compile a simple sequence.
###Code
mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 11))
plot_pulse_files(mf, time=True)
###Output
Compiled 11 sequences.
<module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'>
<module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'>
###Markdown
Open the offsets file (in the same directory as the `.aps2` files, one per AWG slice.)
###Code
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets")
with open(offset_f, "rb") as FID:
offsets = pickle.load(FID)
offsets
###Output
_____no_output_____
###Markdown
Let's replace every single pulse with a fixed amplitude `Utheta`
###Code
pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets}
wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2")
QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
###Output
_____no_output_____
###Markdown
We see that the data in the file has been updated.
###Code
plot_pulse_files(mf, time=True)
###Output
<module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'>
<module 'QGL.drivers.APS2Pattern' from '/Users/growland/workspace/QGL/QGL/drivers/APS2Pattern.py'>
###Markdown
Profiling How long does this take?
###Code
%timeit mf = RabiAmp(cl["q1"], np.linspace(-1, 1, 100))
###Output
Compiled 100 sequences.
Compiled 100 sequences.
Compiled 100 sequences.
Compiled 100 sequences.
Compiled 100 sequences.
Compiled 100 sequences.
Compiled 100 sequences.
Compiled 100 sequences.
317 ms ± 6.15 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Getting the offsets is fast, and only needs to be done once
###Code
def get_offsets():
offset_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.offsets")
with open(offset_f, "rb") as FID:
offsets = pickle.load(FID)
return offsets
%timeit offsets = get_offsets()
%timeit pulses = {l: Utheta(q1, amp=0.1, phase=0) for l in offsets}
wfm_f = os.path.join(os.path.dirname(mf), "Rabi-BBNAPS1.aps2")
%timeit QGL.drivers.APS2Pattern.update_wf_library(wfm_f, pulses, offsets)
# %timeit QGL.drivers.APS2Pattern.update_wf_library("/Users/growland/workspace/AWG/Rabi/Rabi-BBNAPS1.aps2", pulses, offsets)
###Output
1.25 ms ± 19.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
|
step01_CNNmodel_02-CV0.ipynb | ###Markdown
Helper Functions
###Code
# Helper functions
# NOTE: the imports below are assumed/reconstructed -- they are not present in this notebook as
# saved, but the helpers rely on them. (`imageID2pathDict` and `gtLabelDict` are assumed to be
# built elsewhere in the notebook.)
import numpy as np
import torch
from torch import nn
import torchvision.models as models
import torchvision.transforms as transforms
import albumentations as albu
from torch.utils.data import Dataset as BaseDataset
from pydicom import dcmread
from tqdm import tqdm
from datetime import datetime
def get_training_augmentation():
train_transform = [
albu.ShiftScaleRotate(scale_limit=0.05, rotate_limit=30, shift_limit=0.05, p=1, border_mode=0),
albu.IAAAdditiveGaussianNoise(p=0.1),
albu.IAAPerspective(p=0.2),
albu.OneOf(
[
albu.CLAHE(p=1),
albu.RandomBrightness(p=1),
albu.RandomGamma(p=1),
],
p=0.3,
),
albu.OneOf(
[
albu.IAASharpen(p=1),
albu.Blur(blur_limit=3, p=1),
albu.MotionBlur(blur_limit=3, p=1),
],
p=0.3,
),
albu.OneOf(
[
albu.RandomContrast(p=1),
albu.HueSaturationValue(p=1),
],
p=0.3,
),
]
return albu.Compose(train_transform)
def get_validation_augmentation():
    """Validation-time augmentation: currently a no-op (the padding transform from the
    original template is kept below, commented out)."""
    # test_transform = [
    #     albu.PadIfNeeded(384, 480)
    # ]
    test_transform = []
    return albu.Compose(test_transform)
def to_tensor(x, **kwargs):
return x.transpose(2, 0, 1).astype('float32')
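# The Normalize() values below are the standard ImageNet channel means/stds, matching the
# ImageNet-pretrained ResNet-50 backbone defined further down.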
data_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
def window(img, WL=50, WW=350):
upper, lower = WL+WW//2, WL-WW//2
X = np.clip(img.copy(), lower, upper)
X = X - np.min(X)
X = X / np.max(X)
X = (X*255.0).astype('uint8')
return X
class Dataset(BaseDataset):
def __init__(
self,
dataframe=None,
augmentation=None,
transform=None,
dirPath=None,
):
self.dataframe = dataframe
self.ids = self.dataframe.index.values.tolist()
self.augmentation = augmentation
self.transform=transform
self.dirPath = dirPath
def __getitem__(self, i):
thisID = self.ids[i]
#jpgPath = imageID2pathDict[thisID]
#image = cv2.imread(jpgPath)
dcmPath = imageID2pathDict[thisID]
dcm_data = dcmread(dcmPath)
image = dcm_data.pixel_array * int(dcm_data.RescaleSlope) + int(dcm_data.RescaleIntercept)
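        # Stack three CT windows as the three image channels (roughly a lung window,
        # a soft-tissue/mediastinal window, and a wider window often used for PE studies).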
image = np.stack([window(image, WL=-600, WW=1500),
window(image, WL=40, WW=400),
window(image, WL=100, WW=700)], 2)
target = gtLabelDict[thisID]
target = target.astype(np.float32)
# apply augmentations
if self.augmentation:
sample = self.augmentation(image=image)
image = sample['image']
'''
# apply preprocessing
if self.preprocessing:
sample = self.preprocessing(image=image)
image = sample['image']
'''
image = image.astype(np.float32)
#image = np.rollaxis(image, -1, 0)
#image = image.transpose((2, 0, 1))
if self.transform:
image = self.transform(image)
return image, target
def __len__(self):
return len(self.ids)
def resnet50_model():
myModel = models.resnet50(pretrained=True)
num_ftrs = myModel.fc.in_features
myModel.fc = nn.Sequential(
nn.Linear(num_ftrs, 256),
nn.ReLU(),
nn.Dropout(p = 0.2),
nn.Linear(256, 64),
nn.ReLU(),
nn.Linear(64, 8),
)
return myModel
# Custom weighted loss function
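# It applies BCEWithLogitsLoss with a dedicated positive-class weight to the
# image-level PE label (column 0), a separately weighted BCEWithLogitsLoss to the
# remaining seven exam-level labels (columns 1-7), and returns the sum of the two.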
class customWeightedBCEwithLogits(nn.Module):
def __init__(self, PE_pos_weight = 3.0, other_pos_weight = [30.0, 30.0, 3.0, 3.0, 3.0, 1.2, 0.5]):
super(customWeightedBCEwithLogits, self).__init__()
self.image_PE_PosWeight = torch.tensor(PE_pos_weight, requires_grad=False).cuda()
self.otherLabels_PosWeight = torch.tensor(other_pos_weight, requires_grad=False).cuda()
self.criterion1 = nn.BCEWithLogitsLoss(pos_weight=self.image_PE_PosWeight)
self.criterion2 = nn.BCEWithLogitsLoss(pos_weight=self.otherLabels_PosWeight)
def forward(self, inputs, targets):
loss1 = self.criterion1(inputs[:,0:1], targets[:,0:1])
loss2 = self.criterion2(inputs[:,1:], targets[:,1:])
return loss1+loss2
def train_loop(model, train_loader, valid_loader):
# Train one epoch
train_total = train_correct = train_cost = 0
model.train()
for x, y in tqdm(train_loader):
x = x.cuda()
y = y.cuda()
optimizer.zero_grad()
z = model(x)
train_total += y.size(0)
train_correct += ((torch.sigmoid(z[:,0])>0.5) == (y[:,0]>0.5)).sum().item()
loss = customLoss(z, y)
loss.backward()
optimizer.step()
train_cost += loss.item()
return train_cost/train_total, train_correct/train_total
def valid_loop(model, train_loader, valid_loader):
# Evaluate on validation data
val_total = val_correct = val_cost = 0
model.eval()
with torch.no_grad():
for x_val, y_val in tqdm(valid_loader):
x_val = x_val.cuda()
y_val = y_val.cuda()
z = model(x_val)
val_total += y_val.size(0)
val_correct += ((torch.sigmoid(z[:,0])>0.5) == (y_val[:,0]>0.5)).sum().item()
loss = customLoss(z, y_val)
val_cost += loss.item()
return val_cost/val_total, val_correct/val_total
def main_loop(n_epochs, model, train_loader, valid_loader):
for epoch in range(n_epochs):
print('epoch ' + str(epoch) + ':')
train_avgCost, train_acc = train_loop(model, train_loader, valid_loader)
val_avgCost, val_acc = valid_loop(model, train_loader, valid_loader)
print('train_cost: %.4f, train_acc: %.4f, val_cost: %.4f, val_acc: %.4f'\
% (train_avgCost, train_acc, val_avgCost, val_acc))
now = datetime.now().strftime("%Y%m%d_%H%M")
modelPath = 'models/CNNmodel/CNNmodel_01_cv0_epoch' + str(epoch) + '_' + now +'.pth'
print('saving: ',modelPath)
torch.save(model, modelPath)
myModel = resnet50_model()
myModel = myModel.cuda()
# Prepare train variables and parameters
col_names = ['train_cost', 'train_acc', 'val_cost', 'val_acc']
resultsDF = pd.DataFrame(columns = col_names)
epochCount = 0
optimizer =torch.optim.Adam(myModel.parameters(), lr=0.00005)
customLoss = customWeightedBCEwithLogits()
# prepare dataset and dataloader
preTrainDF = dataDF[dataDF['fold']==4]
trainDF = dataDF[dataDF['fold']!=0]
valDF = dataDF[dataDF['fold']==0]
my_pretrain_dataset = Dataset(
dataframe= preTrainDF,
augmentation=get_training_augmentation(),
transform=data_transform,
)
my_train_dataset = Dataset(
dataframe= trainDF,
augmentation=get_training_augmentation(),
transform=data_transform,
)
my_val_dataset = Dataset(
dataframe= valDF,
augmentation=None,
transform=data_transform,
)
myPreTrainLoader = DataLoader(my_pretrain_dataset, batch_size=48, shuffle=True, num_workers=4)
myTrainLoader = DataLoader(my_train_dataset, batch_size=42, shuffle=True, num_workers=4)
myValidLoader = DataLoader(my_val_dataset, batch_size=42, shuffle=True, num_workers=4)
# Sanity Check
print(my_train_dataset.__len__())
oneItem = my_pretrain_dataset.__getitem__(35)
print('label:', oneItem[1])
print(oneItem[1].shape)
print('image shape:', oneItem[0].shape)
for eachInd in range(3):
plt.figure()
plt.imshow(oneItem[0][eachInd,:,:], cmap='gray')
###Output
1429585
label: [0. 0. 0. 0. 0. 0. 0. 1.]
(8,)
image shape: torch.Size([3, 512, 512])
###Markdown
Pre-training
###Code
ind = 0
for name, child in myModel.named_children():
for name2, params in child.named_parameters():
print('block index:', str(ind), name, name2)
ind = ind +1
# Freeze everything except block index 9
trainBlock = [9]
ind = 0
for name, child in myModel.named_children():
if ind not in trainBlock:
for name2, params in child.named_parameters():
params.requires_grad = False
ind = ind +1
main_loop(1, myModel, myPreTrainLoader, myValidLoader)
###Output
epoch 0:
###Markdown
Train for 3 more epochs
###Code
#myModel = torch.load('models/CNNmodel/CNNmodel_01_epoch0_20201008_0038.pth')
#myModel.cuda()
#torch.cuda.empty_cache()
# Unfreeze everything before further training
for name, child in myModel.named_children():
for name2, params in child.named_parameters():
params.requires_grad = True
# Train for 3 more epochs
main_loop(3, myModel, myTrainLoader, myValidLoader)
# Train for 1 more epoch
main_loop(1, myModel, myTrainLoader, myValidLoader)
###Output
epoch 0:
|
Original_iLQR/ilqr-master/examples/rendezvous.ipynb | ###Markdown
Multi-Vehicle Rendezvous ProblemThe dynamics model of an omnidirectional vehicle with friction coefficient$\alpha$ is defined by the following equation:$$m \dot{\textbf{v}} = \textbf{u} - \alpha \textbf{v}$$iLQR is applied to a two vehicle system in order to control them to gentlycollide with each other with a terminal velocity of $0 \frac{m}{s}$.The state vector $\textbf{x}$ is defined as follows:$$\begin{equation*}\textbf{x} = \begin{bmatrix} x_0 & y_0 & x_1 & y_1 & \dot{x}_0 & \dot{y}_0 & \dot{x}_1 & \dot{y}_1 \end{bmatrix}^T\end{equation*}$$The action vector $\textbf{u}$ is defined as follows:$$\begin{equation*}\textbf{u} = \begin{bmatrix} F_{x_0} & F_{y_0} & F_{x_1} & F_{y_1} \end{bmatrix}^T\end{equation*}$$**Note**: That since this dynamics model is linear, this problem can be solvedmore efficiently with a simple Linear Quadratic Regulator (LQR) instead. Thisexample is just used to demonstrate how to setup an auto-differentiateddynamics model.
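One common way to turn the continuous model into a discrete-time model with step $\Delta t$ is a forward-Euler update (shown here only as a reminder of the standard discretization; the exact discrete expressions used in this example are spelled out in the `AutoDiffDynamics` definition below):$$\textbf{v}_{t+1} \approx \left(1 - \frac{\alpha \Delta t}{m}\right)\textbf{v}_t + \frac{\Delta t}{m}\textbf{u}_t, \qquad \textbf{p}_{t+1} \approx \textbf{p}_t + \textbf{v}_t \Delta t,$$where $\textbf{p}$ collects the position components of the state.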
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
import theano.tensor as T
import matplotlib.pyplot as plt
from ilqr import iLQR
from ilqr.cost import QRCost
from ilqr.dynamics import AutoDiffDynamics
def on_iteration(iteration_count, xs, us, J_opt, accepted, converged):
J_hist.append(J_opt)
info = "converged" if converged else ("accepted" if accepted else "failed")
print("iteration", iteration_count, info, J_opt)
x_inputs = [
T.dscalar("x_0"),
T.dscalar("y_0"),
T.dscalar("x_1"),
T.dscalar("y_1"),
T.dscalar("x_0_dot"),
T.dscalar("y_0_dot"),
T.dscalar("x_1_dot"),
T.dscalar("y_1_dot"),
]
u_inputs = [
T.dscalar("F_x_0"),
T.dscalar("F_y_0"),
T.dscalar("F_x_1"),
T.dscalar("F_y_1"),
]
dt = 0.1 # Discrete time step.
m = 1.0 # Mass.
alpha = 0.1 # Friction coefficient.
# Acceleration.
def acceleration(x_dot, u):
x_dot_dot = x_dot * (1 - alpha * dt / m) + u * dt / m
return x_dot_dot
# Discrete dynamics model definition.
f = T.stack([
x_inputs[0] + x_inputs[4] * dt,
x_inputs[1] + x_inputs[5] * dt,
x_inputs[2] + x_inputs[6] * dt,
x_inputs[3] + x_inputs[7] * dt,
x_inputs[4] + acceleration(x_inputs[4], u_inputs[0]) * dt,
x_inputs[5] + acceleration(x_inputs[5], u_inputs[1]) * dt,
x_inputs[6] + acceleration(x_inputs[6], u_inputs[2]) * dt,
x_inputs[7] + acceleration(x_inputs[7], u_inputs[3]) * dt,
])
dynamics = AutoDiffDynamics(f, x_inputs, u_inputs)
###Output
_____no_output_____
###Markdown
An instantaneous cost function $l(\textbf{x}_t, \textbf{u}_t)$ is defined as follows:$$l(\textbf{x}_t, \textbf{u}_t) = \textbf{x}_t^T Q \textbf{x}_t + \textbf{u}_t^T R \textbf{u}_t$$where $Q$ weights the state error and $R$ weights the control effort.In order to bring the two vehicles together, $Q$ is set up to penalize the difference in their positions, $||\textbf{x}_0 - \textbf{x}_1||^2$, as well as any non-zero velocities.
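To make the role of $Q$ concrete: with the matrix constructed in the next cell (an identity matrix plus $-1$ couplings between the two vehicles' position coordinates), the state term expands to$$\textbf{x}_t^T Q \textbf{x}_t = (x_0 - x_1)^2 + (y_0 - y_1)^2 + \dot{x}_0^2 + \dot{y}_0^2 + \dot{x}_1^2 + \dot{y}_1^2,$$so the cost is smallest when the two vehicles coincide and all velocities are zero.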
###Code
Q = np.eye(dynamics.state_size)
Q[0, 2] = Q[2, 0] = -1
Q[1, 3] = Q[3, 1] = -1
R = 0.1 * np.eye(dynamics.action_size)
cost = QRCost(Q, R)
###Output
_____no_output_____
###Markdown
The vehicles are initialized at $(0, 0)$ and $(10, 10)$ with velocities $(0, -5)$ and $(5, 0)$ respectively.
###Code
N = 200 # Number of time steps in trajectory.
x0 = np.array([0, 0, 10, 10, 0, -5, 5, 0]) # Initial state.
# Random initial action path.
us_init = np.random.uniform(-1, 1, (N, dynamics.action_size))
J_hist = []
ilqr = iLQR(dynamics, cost, N)
xs, us = ilqr.fit(x0, us_init, on_iteration=on_iteration)
x_0 = xs[:, 0]
y_0 = xs[:, 1]
x_1 = xs[:, 2]
y_1 = xs[:, 3]
x_0_dot = xs[:, 4]
y_0_dot = xs[:, 5]
x_1_dot = xs[:, 6]
y_1_dot = xs[:, 7]
_ = plt.title("Trajectory of the two omnidirectional vehicles")
_ = plt.plot(x_0, y_0, "r")
_ = plt.plot(x_1, y_1, "b")
_ = plt.legend(["Vehicle 1", "Vehicle 2"])
t = np.arange(N + 1) * dt
_ = plt.plot(t, x_0, "r")
_ = plt.plot(t, x_1, "b")
_ = plt.xlabel("Time (s)")
_ = plt.ylabel("x (m)")
_ = plt.title("X positional paths")
_ = plt.legend(["Vehicle 1", "Vehicle 2"])
_ = plt.plot(t, y_0, "r")
_ = plt.plot(t, y_1, "b")
_ = plt.xlabel("Time (s)")
_ = plt.ylabel("y (m)")
_ = plt.title("Y positional paths")
_ = plt.legend(["Vehicle 1", "Vehicle 2"])
_ = plt.plot(t, x_0_dot, "r")
_ = plt.plot(t, x_1_dot, "b")
_ = plt.xlabel("Time (s)")
_ = plt.ylabel("x_dot (m)")
_ = plt.title("X velocity paths")
_ = plt.legend(["Vehicle 1", "Vehicle 2"])
_ = plt.plot(t, y_0_dot, "r")
_ = plt.plot(t, y_1_dot, "b")
_ = plt.xlabel("Time (s)")
_ = plt.ylabel("y_dot (m)")
_ = plt.title("Y velocity paths")
_ = plt.legend(["Vehicle 1", "Vehicle 2"])
_ = plt.plot(J_hist)
_ = plt.xlabel("Iteration")
_ = plt.ylabel("Total cost")
_ = plt.title("Total cost-to-go")
###Output
_____no_output_____ |
Week5/week-5-lasso-assignment-2.ipynb | ###Markdown
Regression Week 5: LASSO (coordinate descent) In this notebook, you will implement your very own LASSO solver via coordinate descent. You will:* Write a function to normalize features* Implement coordinate descent for LASSO* Explore effects of L1 penalty Fire up graphlab create Make sure you have the latest version of graphlab (>= 1.7)
###Code
import graphlab
graphlab.version
###Output
_____no_output_____
###Markdown
Load in house sales dataDataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
sales = graphlab.SFrame('kc_house_data.gl/')
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to int, before using it below
sales['floors'] = sales['floors'].astype(int)
###Output
This non-commercial license of GraphLab Create for academic use is assigned to [email protected] and will expire on September 10, 2017.
###Markdown
If we want to do any "feature engineering" like creating new features or adjusting existing ones, we should do this directly using the SFrames, as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste `get_numpy_data()` from the second notebook of Week 2.
###Code
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
###Output
_____no_output_____
###Markdown
Also, copy and paste the `predict_output()` function to compute the predictions for an entire matrix of features given the matrix and the weights:
###Code
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
###Output
_____no_output_____
###Markdown
Normalize featuresIn the house dataset, features vary wildly in their relative magnitude: `sqft_living` is very large overall compared to `bedrooms`, for instance. As a result, weight for `sqft_living` would be much smaller than weight for `bedrooms`. This is problematic because "small" weights are dropped first as `l1_penalty` goes up. To give equal considerations for all features, we need to **normalize features** as discussed in the lectures: we divide each feature by its 2-norm so that the transformed feature has norm 1.Let's see how we can do this normalization easily with Numpy: let us first consider a small matrix.
###Code
X = np.array([[3.,5.,8.],[4.,12.,15.]])
print X
###Output
[[ 3. 5. 8.]
[ 4. 12. 15.]]
###Markdown
Numpy provides a shorthand for computing 2-norms of each column:
###Code
norms = np.linalg.norm(X, axis=0) # gives [norm(X[:,0]), norm(X[:,1]), norm(X[:,2])]
print norms
###Output
[ 5. 13. 17.]
###Markdown
To normalize, apply element-wise division:
###Code
print X / norms # gives [X[:,0]/norm(X[:,0]), X[:,1]/norm(X[:,1]), X[:,2]/norm(X[:,2])]
###Output
[[ 0.6 0.38461538 0.47058824]
[ 0.8 0.92307692 0.88235294]]
###Markdown
Using the shorthand we just covered, write a short function called `normalize_features(feature_matrix)`, which normalizes columns of a given feature matrix. The function should return a pair `(normalized_features, norms)`, where the second item contains the norms of original features. As discussed in the lectures, we will use these norms to normalize the test data in the same way as we normalized the training data.
###Code
def normalize_features(feature_matrix):
norms = np.linalg.norm(feature_matrix, axis=0)
return (feature_matrix/norms, norms)
###Output
_____no_output_____
###Markdown
To test the function, run the following:
###Code
features, norms = normalize_features(np.array([[3.,6.,9.],[4.,8.,12.]]))
print features
# should print
# [[ 0.6 0.6 0.6]
# [ 0.8 0.8 0.8]]
print norms
# should print
# [5. 10. 15.]
###Output
[[ 0.6 0.6 0.6]
[ 0.8 0.8 0.8]]
[ 5. 10. 15.]
###Markdown
Implementing Coordinate Descent with normalized features We seek to obtain a sparse set of weights by minimizing the LASSO cost function```SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|).```(By convention, we do not include `w[0]` in the L1 penalty term. We never want to push the intercept to zero.)The absolute value sign makes the cost function non-differentiable, so simple gradient descent is not viable (you would need to implement a method called subgradient descent). Instead, we will use **coordinate descent**: at each iteration, we will fix all weights but weight `i` and find the value of weight `i` that minimizes the objective. That is, we look for```argmin_{w[i]} [ SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|) ]```where all weights other than `w[i]` are held to be constant. We will optimize one `w[i]` at a time, circling through the weights multiple times. 1. Pick a coordinate `i` 2. Compute `w[i]` that minimizes the cost function `SUM[ (prediction - output)^2 ] + lambda*( |w[1]| + ... + |w[k]|)` 3. Repeat Steps 1 and 2 for all coordinates, multiple times For this notebook, we use **cyclical coordinate descent with normalized features**, where we cycle through coordinates 0 to (d-1) in order, and assume the features were normalized as discussed above. The formula for optimizing each coordinate is as follows:``` ┌ (ro[i] + lambda/2) if ro[i] < -lambda/2w[i] = ├ 0 if -lambda/2 <= ro[i] <= lambda/2 └ (ro[i] - lambda/2) if ro[i] > lambda/2```where```ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ].```Note that we do not regularize the weight of the constant feature (intercept) `w[0]`, so, for this weight, the update is simply:```w[0] = ro[i]``` Effect of L1 penalty Let us consider a simple model with 2 features:
###Code
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
###Output
_____no_output_____
###Markdown
Don't forget to normalize features:
###Code
simple_feature_matrix, norms = normalize_features(simple_feature_matrix)
###Output
_____no_output_____
###Markdown
We assign some random set of initial weights and inspect the values of `ro[i]`:
###Code
weights = np.array([1., 4., 1.])
###Output
_____no_output_____
###Markdown
Use `predict_output()` to make predictions on this data.
###Code
prediction = predict_output(simple_feature_matrix, weights)
###Output
_____no_output_____
###Markdown
Compute the values of `ro[i]` for each feature in this simple model, using the formula given above, using the formula:```ro[i] = SUM[ [feature_i]*(output - prediction + w[i]*[feature_i]) ]```*Hint: You can get a Numpy vector for feature_i using:*```simple_feature_matrix[:,i]```
###Code
ro = np.empty([simple_feature_matrix.shape[1],1])
for i in range(0, simple_feature_matrix.shape[1]):
ro[i] = (simple_feature_matrix[:,i] * (output - prediction + (weights[i] * simple_feature_matrix[:,i]))).sum()
###Output
_____no_output_____
###Markdown
***QUIZ QUESTION***Recall that, whenever `ro[i]` falls between `-l1_penalty/2` and `l1_penalty/2`, the corresponding weight `w[i]` is sent to zero. Now suppose we were to take one step of coordinate descent on either feature 1 or feature 2. What range of values of `l1_penalty` **would not** set `w[1]` zero, but **would** set `w[2]` to zero, if we were to take a step in that coordinate?
###Code
ro
###Output
_____no_output_____
###Markdown
***QUIZ QUESTION***What range of values of `l1_penalty` would set **both** `w[1]` and `w[2]` to zero, if we were to take a step in that coordinate? So we can say that `ro[i]` quantifies the significance of the i-th feature: the larger `ro[i]` is, the more likely it is for the i-th feature to be retained. Single Coordinate Descent Step Using the formula above, implement coordinate descent that minimizes the cost function over a single feature i. Note that the intercept (weight 0) is not regularized. The function should accept feature matrix, output, current weights, l1 penalty, and index of feature to optimize over. The function should return new weight for feature i.
###Code
def lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty):
# compute prediction
prediction = predict_output(feature_matrix, weights)
# compute ro[i] = SUM[ [feature_i]*(output - prediction + weight[i]*[feature_i]) ]
ro_i = (feature_matrix[:,i] * (output - prediction + (weights[i] * feature_matrix[:,i]))).sum()
if i == 0: # intercept -- do not regularize
new_weight_i = ro_i
elif ro_i < -l1_penalty/2.:
new_weight_i = ro_i + (l1_penalty/2)
elif ro_i > l1_penalty/2.:
new_weight_i = ro_i - (l1_penalty/2)
else:
new_weight_i = 0.
return new_weight_i
###Output
_____no_output_____
###Markdown
To test the function, run the following cell:
###Code
# should print 0.425558846691
import math
print lasso_coordinate_descent_step(1, np.array([[3./math.sqrt(13),1./math.sqrt(10)],[2./math.sqrt(13),3./math.sqrt(10)]]),
np.array([1., 1.]), np.array([1., 4.]), 0.1)
###Output
0.425558846691
###Markdown
Cyclical coordinate descent Now that we have a function that optimizes the cost function over a single coordinate, let us implement cyclical coordinate descent where we optimize coordinates 0, 1, ..., (d-1) in order and repeat.When do we know to stop? Each time we scan all the coordinates (features) once, we measure the change in weight for each coordinate. If no coordinate changes by more than a specified threshold, we stop. For each iteration:1. As you loop over features in order and perform coordinate descent, measure how much each coordinate changes.2. After the loop, if the maximum change across all coordinates is falls below the tolerance, stop. Otherwise, go back to step 1.Return weights**IMPORTANT: when computing a new weight for coordinate i, make sure to incorporate the new weights for coordinates 0, 1, ..., i-1. One good way is to update your weights variable in-place. See following pseudocode for illustration.**```for i in range(len(weights)): old_weights_i = weights[i] remember old value of weight[i], as it will be overwritten the following line uses new values for weight[0], weight[1], ..., weight[i-1] and old values for weight[i], ..., weight[d-1] weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty) use old_weights_i to compute change in coordinate ...```
###Code
def lasso_cyclical_coordinate_descent(feature_matrix, output, initial_weights, l1_penalty, tolerance):
weights = initial_weights
flag = True
while (flag):
        old_weights = np.copy(weights)  # copy so we can measure how much each coordinate changes
for i in range(len(initial_weights)):
weights[i] = lasso_coordinate_descent_step(i, feature_matrix, output, weights, l1_penalty)
flag = False
for i in range(len(initial_weights)):
if (abs(old_weights[i] - weights[i]) >= tolerance):
flag = True
break
return weights
###Output
_____no_output_____
###Markdown
Using the following parameters, learn the weights on the sales dataset.
###Code
simple_features = ['sqft_living', 'bedrooms']
my_output = 'price'
initial_weights = np.zeros(3)
l1_penalty = 1e7
tolerance = 1.0
###Output
_____no_output_____
###Markdown
First create a normalized version of the feature matrix, `normalized_simple_feature_matrix`.
###Code
(simple_feature_matrix, output) = get_numpy_data(sales, simple_features, my_output)
(normalized_simple_feature_matrix, simple_norms) = normalize_features(simple_feature_matrix) # normalize features
###Output
_____no_output_____
###Markdown
Then, run your implementation of LASSO coordinate descent:
###Code
weights = lasso_cyclical_coordinate_descent(normalized_simple_feature_matrix, output,
initial_weights, l1_penalty, tolerance)
print weights
predictions = predict_output(normalized_simple_feature_matrix, weights)
RSS = np.dot(predictions - output, predictions - output).sum()
print RSS
###Output
[ 79400304.65805088 10305258.63602118 -299724.11449882]
2.70057873615e+15
###Markdown
***QUIZ QUESTIONS***1. What is the RSS of the learned model on the normalized dataset? (Hint: use the normalized feature matrix when you make predictions.)2. Which features had weight zero at convergence? Evaluating LASSO fit with more features Let us split the sales dataset into training and test sets.
###Code
(train_data,test_data) = sales.random_split(.8,seed=0)
###Output
_____no_output_____
###Markdown
Let us consider the following set of features.
###Code
all_features = ['bedrooms',
'bathrooms',
'sqft_living',
'sqft_lot',
'floors',
'waterfront',
'view',
'condition',
'grade',
'sqft_above',
'sqft_basement',
'yr_built',
'yr_renovated']
###Output
_____no_output_____
###Markdown
First, create a normalized feature matrix from the TRAINING data with these features. (Make sure you store the norms for the normalization, since we'll use them later.)
###Code
(feature_matrix, output) = get_numpy_data(train_data, all_features, 'price')
(norm_feature_matrix, norms) = normalize_features(feature_matrix)
###Output
_____no_output_____
###Markdown
First, learn the weights with `l1_penalty=1e7`, on the training data. Initialize weights to all zeros, and set the `tolerance=1`. Call resulting weights `weights1e7`, you will need them later.
###Code
tolerance = 1
initial_weights = np.zeros(feature_matrix.shape[1])
l1_penalty = 1e7
weights1e7 = lasso_cyclical_coordinate_descent(norm_feature_matrix, output, initial_weights, l1_penalty, tolerance)
print weights1e7
###Output
[ 71114625.75280938 0. 3743972.43191673
5271064.34696085 0. 0. 7173100.28480826
7025132.06642577 -5530804.65691784 0. 394565.5843951
2242690.39485069 -2160960.47385677 0. ]
###Markdown
***QUIZ QUESTION***What features had non-zero weight in this case? Next, learn the weights with `l1_penalty=1e8`, on the training data. Initialize weights to all zeros, and set the `tolerance=1`. Call resulting weights `weights1e8`, you will need them later.
###Code
l1_penalty = 1e8
tolerance = 1
initial_weights = np.zeros(feature_matrix.shape[1])
weights1e8 = lasso_cyclical_coordinate_descent(norm_feature_matrix, output, initial_weights, l1_penalty, tolerance)
print weights1e8
###Output
[ 71114625.75280938 0. 0. 0.
0. 0. 0. 0.
0. 0. 0. 0.
0. 0. ]
###Markdown
***QUIZ QUESTION***What features had non-zero weight in this case? Finally, learn the weights with `l1_penalty=1e4`, on the training data. Initialize weights to all zeros, and set the `tolerance=5e5`. Call resulting weights `weights1e4`, you will need them later. (This case will take quite a bit longer to converge than the others above.)
###Code
l1_penalty = 1e4
tolerance = 5e5
initial_weights = np.zeros(feature_matrix.shape[1])
weights1e4 = lasso_cyclical_coordinate_descent(norm_feature_matrix, output, initial_weights, l1_penalty, tolerance)
print weights1e4
###Output
[ 71114625.75280938 3956380.10169754 4963442.47090491
5351785.20244835 -1029778.70127717 -8770183.36166059
12531492.38446381 10818930.30035373 -8852214.38705103
3691006.48690016 6552708.56440671 2946362.53466733
-11587978.82672676 3043479.31193253]
###Markdown
***QUIZ QUESTION***What features had non-zero weight in this case? Rescaling learned weights Recall that we normalized our feature matrix, before learning the weights. To use these weights on a test set, we must normalize the test data in the same way.Alternatively, we can rescale the learned weights to include the normalization, so we never have to worry about normalizing the test data: In this case, we must scale the resulting weights so that we can make predictions with *original* features: 1. Store the norms of the original features to a vector called `norms`:```features, norms = normalize_features(features)``` 2. Run Lasso on the normalized features and obtain a `weights` vector 3. Compute the weights for the original features by performing element-wise division, i.e.```weights_normalized = weights / norms```Now, we can apply `weights_normalized` to the test data, without normalizing it! Create a normalized version of each of the weights learned above. (`weights1e4`, `weights1e7`, `weights1e8`).
###Code
weights1e7_norm = weights1e7 / norms
weights1e8_norm = weights1e8 / norms
weights1e4_norm = weights1e4 / norms
print weights1e7_norm[3]
###Output
17.5724158049
###Markdown
To check your results, if you call `normalized_weights1e7` the normalized version of `weights1e7`, then:```print normalized_weights1e7[3]```should return 161.31745624837794. Evaluating each of the learned models on the test data Let's now evaluate the three models on the test data:
###Code
(test_feature_matrix, test_output) = get_numpy_data(test_data, all_features, 'price')
###Output
_____no_output_____
###Markdown
Compute the RSS of each of the three normalized weights on the (unnormalized) `test_feature_matrix`:
###Code
pred1e7 = predict_output(test_feature_matrix, weights1e7_norm)
RSS1e7 = np.dot(pred1e7 - test_output, pred1e7 - test_output).sum()
pred1e8 = predict_output(test_feature_matrix, weights1e8_norm)
RSS1e8 = np.dot(pred1e8 - test_output, pred1e8 - test_output).sum()
pred1e4 = predict_output(test_feature_matrix, weights1e4_norm)
RSS1e4 = np.dot(pred1e4 - test_output, pred1e4 - test_output).sum()
print RSS1e7
print RSS1e8
print RSS1e4
###Output
4.18467351052e+14
5.37166150034e+14
3.76207259396e+14
|
baseline_models/FET/FET.ipynb | ###Markdown
ARIMA Baseline Model
###Code
data = df[target_column]
def mean_absolute_percent_error(y_true, y_pred):
pct_error = abs(y_true - y_pred) / abs(y_true)
return pct_error.mean(axis=0) * 100
# 1 ARIMA Baseline Model
def ARIMA_Model(holdout,dataset):
# Fit a simple auto_arima model
modl = pm.auto_arima(dataset, start_p=0, start_q=0, start_P=0, start_Q=0,
max_p=5, max_q=5, max_P=5, max_Q=5, seasonal=True,
stepwise=True, suppress_warnings=True, D=10, max_D=10,
error_action='ignore')
# Create predictions for the future, evaluate on test
preds, conf_int = modl.predict(holdout, return_conf_int=True)
return preds, conf_int
# Validating the model (Sliding Window)
loop_value = int(len(data)/100)
train_window_size = 100
test_window_size = 10
step_size = train_window_size + test_window_size
arima_prediction = []
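# For each consecutive 100-hour training window, fit auto_arima and forecast the
# next 10 hours; the forecasts are compared against the true values collected below.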
for i in range(0,loop_value):
arima_pred, arima_config = ARIMA_Model(test_window_size,data.iloc[i*train_window_size:(i+1)*train_window_size])
arima_prediction.append(arima_pred)
# Compute Real Values every 100 hours
r_value=[]
for i in range(1,loop_value+1):
v= data.iloc[i*100:i*train_window_size + test_window_size]
r_value.append(v)
# Computing metrics (MAPE)
arima_mape_list=[]
for i in range(0,len(r_value)):
mape=mean_absolute_percent_error(r_value[i],arima_prediction[i])
arima_mape_list.append(mape)
# Mean Value of MAPE
arima_MAPE = sum(arima_mape_list)/len(arima_mape_list)
# Print MAPE
print("The Mean Absolute Percentage Error in ARIMA Model is equal to",round(arima_MAPE,2))
# Train-test Split
train = data[:-10]  # hold out the last 10 hours as the test set
test = data.tail(10)
# Forecasting t+10 timesteps
arima_forecast, arima_config = ARIMA_Model(10,train)
# Plot Forecasting Values
fig, ax = plt.subplots(figsize=(16, 10))
ax.plot(train[2100:].index, train.values[2100:]);
ax.plot(test.index, test.values, label='truth');
ax.plot(test.index, arima_forecast, linestyle='--', color='#ff7823');
ax.set_title("ARIMA t+10 Forecasting");
plt.savefig('ARIMA t+10 Forecasting.png')
###Output
_____no_output_____
###Markdown
Theta Baseline Model
###Code
# 2 Theta Baseline Model
# Step 1: Check for seasonality
# Step 2: Decompose Seasonality if it is deemed seasonal
# Step 3: Applying Theta Method
# Step 4: Reseasonalize the resulting forecast
def sesThetaF(y, s_period , h = 10, level = np.array([90,95,99])):
"""
@param y : array-like time series data
@param s_period : the no. of observations before seasonal pattern repeats
    @param h : number of periods for forecasting
@param level: confidence levels for prediction intervals
"""
if not s_period:
print('ERROR: s_period variable only accepts positive integer.')
sys.exit()
fcast = {} # store result
# Check seasonality
x = y.copy()
n = y.index.size
m = s_period
if m > 1 and n > 2 * m:
r = (acf(x, nlags = m))[1:]
temp = np.delete(r, m-1)
stat = np.sqrt((1+ 2 * np.sum(np.square(temp))) / n)
seasonal = (abs(r[m-1])/stat) > norm.cdf(0.95)
else:
seasonal = False
# Seasonal Decomposition
origx = x.copy()
if seasonal:
decomp = seasonal_decompose(x, model = 'multiplicative')
if decomp.seasonal < 1e-10 :
warnings.warn('Seasonal indexes equal to zero. Using non-seasonal Theta method')
else:
x = decomp.observed/decomp.seasonal
# Find theta lines
model = SimpleExpSmoothing(x).fit()
fcast['mean'] = model.forecast(h)
num = np.array(range(0,n))
temp = LinearRegression().fit(num.reshape(-1,1),x).coef_
temp = temp/2
alpha = np.maximum(1e-10, model.params['smoothing_level'])
fcast['mean'] = fcast['mean'] + temp * (np.array(range(0,h)) + (1 - (1 - alpha)**n)/alpha)
# Reseasonalize
if seasonal:
fcast['mean'] = fcast['mean'] * np.repeat(decomp.seasonal[-m:], (1 + h//m))[:h]
fcast['fitted'] = model.predict(x.index[0], x.index[n-1]) * decomp.seasonal
else:
fcast['fitted'] = model.predict(x.index[0], x.index[n-1])
fcast['residuals'] = origx - fcast['fitted']
return fcast
# Prepare the hourly close-price series and forward-fill missing values
data = pd.Series(df['close']).asfreq("H")
data.fillna(method='ffill', inplace=True)
np.all(np.isfinite(data))
# Validating the model (Sliding Window)
theta_pred_list=[]
for i in range(0,loop_value):
theta_pred = sesThetaF(data[i*100:(i+1)*100],s_period=1,h = 10)
theta_pred_list.append(theta_pred['mean'])
r_value=[]
for i in range(1,loop_value+1):
v= data.iloc[i*100:i*train_window_size + test_window_size]
r_value.append(v)
# Computing metrics (MAPE)
theta_mape_list=[]
for i in range(0,len(r_value)):
mape=mean_absolute_percent_error(r_value[i],theta_pred_list[i])
theta_mape_list.append(mape)
# Mean Value of MAPE
theta_MAPE = sum(theta_mape_list)/len(theta_mape_list)
# Print MAPE
print("The Mean Absolute Percentage Error in Theta Model is equal to",round(theta_MAPE,2))
# Forecasting t+10 timesteps
theta_conf = sesThetaF(data,s_period=1,h = 10)
# Plot Forecasting Values
mean = theta_conf['mean']
fitted = theta_conf['fitted']
residuals = theta_conf['residuals']
plt.figure(figsize = (16,10))
plt.plot(fitted, marker = '.', color = 'red', label = 'In-sample Fitted')
plt.plot(mean, marker = '*', color = 'blue', label = 'Forecast')
plt.plot(residuals, marker = '', color = 'green', label = 'Residuals')
plt.title('Standard Theta Model')
plt.legend()
plt.show()
plt.savefig('Standard Theta Model t+10 Forecasting.png')
###Output
_____no_output_____
###Markdown
HW Exponential Smoothing Baseline Model
###Code
# Dataset pre-processing
data = df[target_column]
data = pd.Series(df['close']).asfreq("H")
np.all(np.isfinite(data))
data.fillna(method='ffill', inplace=True)
np.all(np.isfinite(data))
# 3 HWES Baseline Model
exp_smooth_pred_list=[]
for i in range(0,loop_value):
model = ExponentialSmoothing(data[i*100:(i+1)*100],freq="H")
model_fit = model.fit()
# make prediction
yhat = model_fit.predict(100, 109)
exp_smooth_pred_list.append(yhat)
exp_smooth_mape_list=[]
for i in range(0,len(r_value)):
mape=mean_absolute_percent_error(r_value[i],exp_smooth_pred_list[i])
exp_smooth_mape_list.append(mape)
exp_smooth_MAPE = sum(exp_smooth_mape_list)/len(exp_smooth_mape_list)
# Print MAPE
print("The Mean Absolute Percentage Error in Exponential Smoothing Method is equal to",round(exp_smooth_MAPE,2))
# Train-test Split
train = data[:-10]  # hold out the last 10 hours as the test set
test = data.tail(10)
# Forecasting t+10 timesteps
model = ExponentialSmoothing(train,freq="H")
model_fit = model.fit()
# make prediction
yhat = model_fit.predict(len(train), len(train)+9)
# Plot Forecasting Values
fig, ax = plt.subplots(figsize=(16, 10))
ax.plot(train[2100:].index, train.values[2100:]);
ax.plot(test.index, test.values, label='truth');
# ax.plot(test.index, yhat, linestyle='--', color='#ff7823');
ax.set_title("Holt-Winter's Seasonal Smoothing");
plt.savefig("Holt-Winter's Seasonal Smoothing t+10 Forecasting.png")
###Output
_____no_output_____ |
docs/examples/simple_tissue_detection.ipynb | ###Markdown
Tissue Detection**Overview:** This includes tools to detect tissue from an item (slide) using its thumbnail. The basic functionality is a series of Gaussian smoothing and Otsu thresholding steps that separate background from foreground pixels. Optionally, an initial step is performed whereby color deconvolution is used to separate the hematoxylin and eosin stains (assuming H&E-stained slides) so that only cellular areas are segmented. This proves useful for getting rid of sharpie markers. A size threshold is used to keep only the largest contiguous tissue regions.**Where to look?**```|_ histomicstk/ |_saliency/ |_tissue_detection.py |_tests/ |_test_saliency.py```
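For intuition, here is a minimal sketch of that idea using plain scikit-image primitives. This is an illustrative approximation only (the function name `rough_tissue_mask` and its defaults are made up for this sketch); the actual HistomicsTK method, `get_tissue_mask`, is demonstrated below.

```python
# Illustrative sketch only -- approximates the "smooth + Otsu threshold + size filter"
# idea with plain scikit-image calls; it is not the HistomicsTK implementation.
from skimage.color import rgb2gray
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_objects

def rough_tissue_mask(thumbnail_rgb, sigma=2.0, min_size=30):
    gray = rgb2gray(thumbnail_rgb)                          # collapse RGB to intensity
    smooth = gaussian(gray, sigma=sigma)                    # Gaussian smoothing
    mask = smooth < threshold_otsu(smooth)                  # tissue is darker than the white background
    mask = remove_small_objects(mask, min_size=min_size)    # drop tiny specks
    return label(mask)                                      # one integer label per tissue region
```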
###Code
import girder_client
import numpy as np
from matplotlib import pylab as plt
from matplotlib.colors import ListedColormap
from histomicstk.saliency.tissue_detection import (
get_slide_thumbnail, get_tissue_mask)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Constants and Prepwork
###Code
APIURL = 'http://candygram.neurology.emory.edu:8080/api/v1/'
# SAMPLE_SLIDE_ID = '5d586d57bd4404c6b1f28640'
SAMPLE_SLIDE_ID = "5d817f5abd4404c6b1f744bb"
gc = girder_client.GirderClient(apiUrl=APIURL)
# gc.authenticate(interactive=True)
_ = gc.authenticate(apiKey='kri19nTIGOkWH01TbzRqfohaaDWb6kPecRqGmemb')
###Output
_____no_output_____
###Markdown
First, let's fetch the slide thumbnail
###Code
thumbnail_rgb = get_slide_thumbnail(gc, SAMPLE_SLIDE_ID)
plt.imshow(thumbnail_rgb)
###Output
_____no_output_____
###Markdown
(Optional) Color normalization of thumbnailSee documentation for ``color_normalization`` module. Now we fetch the tissue mask This is the method you want to use
###Code
print(get_tissue_mask.__doc__)
###Output
Get binary tissue mask from slide thumbnail.
Parameters
-----------
thumbnail_rgb : np array
(m, n, 3) nd array of thumbnail RGB image
deconvolve_first : bool
use hematoxylin channel to find cellular areas?
This will make things ever-so-slightly slower but is better in
getting rid of sharpie marker (if it's green, for example).
Sometimes things work better without it, though.
stain_matrix_method : str
see deconv_color method in seed_utils
n_thresholding_steps : int
number of gaussian smoothign steps
sigma : float
sigma of gaussian filter
min_size : int
minimum size (in pixels) of contiguous tissue regions to keep
Returns
--------
np bool array
largest contiguous tissue region.
np int32 array
each unique value represents a unique tissue region
###Markdown
Get the tissue masks
###Code
labeled, mask = get_tissue_mask(
thumbnail_rgb, deconvolve_first=True,
n_thresholding_steps=2, sigma=0., min_size=30)
###Output
/home/mtageld/Desktop/HistomicsTK/histomicstk/preprocessing/color_conversion/rgb_to_sda.py:48: RuntimeWarning: divide by zero encountered in log
im_sda = -np.log(im_rgb/(1.*I_0)) * 255/np.log(I_0)
###Markdown
Visualize the result
###Code
vals = np.random.rand(256,3)
vals[0, ...] = [0.9, 0.9, 0.9]
cMap = ListedColormap(1 - vals)
f, ax = plt.subplots(1, 3, figsize=(20, 20))
ax[0].imshow(thumbnail_rgb)
ax[1].imshow(labeled, cmap=cMap) # all tissue regions
ax[2].imshow(mask, cmap=cMap) # largest tissue region
plt.show()
###Output
_____no_output_____
###Markdown
Note effect of hyperparameters
###Code
for deconvolve_first in [False, True]:
for n_thresholding_steps in [1, 2]:
labeled, mask = get_tissue_mask(
thumbnail_rgb, deconvolve_first=deconvolve_first,
n_thresholding_steps=n_thresholding_steps, sigma=0., min_size=30)
f, ax = plt.subplots(1, 3, figsize=(20, 5))
ax[0].imshow(thumbnail_rgb)
ax[1].imshow(labeled, cmap=cMap)
ax[2].imshow(mask, cmap=cMap)
plt.suptitle("deconvolve = %s, n_thresholding_steps = %d" % (deconvolve_first, n_thresholding_steps), fontsize=20)
plt.show()
###Output
_____no_output_____ |
time_analysis.ipynb | ###Markdown
Time samples analysisRead the samples from the file:
###Code
file="time_optimized_0_1000"
f = open(file+".txt")
l=[]
for line in f:
l.append([w for w in line.replace("\t", " ").replace("\n", "").split(" ") if w != ''])
f.close()
# l[:3]
l[0][0]
###Output
_____no_output_____
###Markdown
Convert the data to CSV (easier to pass to pandas):
###Code
ll = []
i=0
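# Each record in the text file spans 8 lines: a "<vertices>_<percentage>_<edges>.txt"
# header followed by lines holding the four algorithm timings (tarjan, nuutila,
# pearce, pearceNR), the number of components found, and a correctness flag.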
while i<len(l):
if "__" in l[i][0]:
print(l[i])
i+=1
else:
v, p, e = l[i][0][:-4].split("_")
ll.append([int(v), float(p), int(e),
float(l[i+2][1][:-1]), float(l[i+3][1][:-1]), float(l[i+4][1][:-1]), float(l[i+5][1][:-1]),
int(l[i+6][1]), int(l[i+7][1])])
i+=8
ll[1000]
import csv
f = open(file+".csv", "w")
f.write("vertices,percentage,edges,tarjan,nuutila,pearce,pearceNR,components,correct\n")
w = csv.writer(f)
w.writerows(ll)
f.close()
###Output
_____no_output_____
###Markdown
Load samples with pandas:
###Code
import pandas as pd
t = pd.read_csv(file+".csv")
print ("Number of samples : ",t.count())
print("Incorrect results : ", t.correct[t.correct==0].count())
t.describe()
###Output
_____no_output_____
###Markdown
Remove outliers above the 99th percentile for all the algorithms:
###Code
for alg in ["tarjan","nuutila","pearce", "pearceNR"]:
qt1=t[alg].quantile(0.99)
print(alg, qt1)
t = t[t[alg]<qt1]
###Output
tarjan 7.099999999999999e-05
nuutila 5.1e-05
pearce 4.1e-05
pearceNR 0.00010002000000000044
###Markdown
Average samples with the same number of vertices and edges:
###Code
t= t[["vertices","edges","tarjan","nuutila", "pearce", "pearceNR"]].groupby(by=["vertices","edges"], as_index=False).mean()
t["v+e"]=t["vertices"]+t["edges"]
t.describe()
###Output
_____no_output_____
###Markdown
Plots In (V+E,t) space averaging 10 samples:
###Code
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
t = t.sort_values(by=["v+e","edges","vertices"])
t.groupby(np.arange(len(t))//10).mean().plot(x="v+e", y=["tarjan","nuutila","pearce", "pearceNR"],figsize=(20,20), subplots=True)
###Output
_____no_output_____
###Markdown
In (V+E,t) space averaging 100 samples:
###Code
t.groupby(np.arange(len(t))//10).max().groupby(np.arange(len(t)/10)//10).mean().plot(x="v+e", y=["tarjan","nuutila","pearce", "pearceNR"],figsize=(20,20), subplots=True)
t.groupby(np.arange(len(t))//10).max().groupby(np.arange(len(t)/10)//10).mean().plot(x="v+e", y=["tarjan","nuutila","pearce", "pearceNR"],figsize=(20,20), subplots=False)
import plotly.plotly as py
import plotly.graph_objs as go
import plotly
import pandas as pd
df = t
plotly.tools.set_credentials_file(username='pscorso93', api_key='K4XqlXUNVsiWVSEtsFEZ')
###Output
_____no_output_____
###Markdown
Plots in (V,E,t) space: Tarjan:
###Code
import plotly.plotly as py
import plotly
import plotly.graph_objs as go
from scipy.interpolate import griddata
x1 = np.linspace(t['vertices'].min(), t['vertices'].max(), len(t['vertices'].unique()))
y1 = np.linspace(t['edges'].min(), t['edges'].max(), len(t['edges'].unique()))
x2, y2 = np.meshgrid(x1, y1)
z2 = griddata((df['vertices'], df['edges']), df['tarjan'], (x2, y2), method='cubic')
data = [
go.Surface(
x=x2,
y=y2,
z=z2
)
]
layout = go.Layout(
title='Tarjan',
autosize=False,
width=1000,
height=1000,
margin=dict(
l=65,
r=50,
b=65,
t=90
)
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(data, filename='tarjan.html')
py.iplot(fig, filename='elevations-3d-surface')
###Output
_____no_output_____
###Markdown
Nuutila:
###Code
x1 = np.linspace(t['vertices'].min(), t['vertices'].max(), len(t['vertices'].unique()))
y1 = np.linspace(t['edges'].min(), t['edges'].max(), len(t['edges'].unique()))
x2, y2 = np.meshgrid(x1, y1)
z2 = griddata((df['vertices'], df['edges']), df['nuutila'], (x2, y2), method='cubic')
data = [
go.Surface(
x=x2,
y=y2,
z=z2
)
]
layout = go.Layout(
title='Nuutila',
autosize=False,
width=1000,
height=1000,
margin=dict(
l=65,
r=50,
b=65,
t=90
)
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(data, filename='nuutila.html')
py.iplot(fig, filename='elevations-3d-surface')
###Output
_____no_output_____
###Markdown
Pearce:
###Code
x1 = np.linspace(t['vertices'].min(), t['vertices'].max(), len(t['vertices'].unique()))
y1 = np.linspace(t['edges'].min(), t['edges'].max(), len(t['edges'].unique()))
x2, y2 = np.meshgrid(x1, y1)
z2 = griddata((df['vertices'], df['edges']), df['pearce'], (x2, y2), method='cubic')
data = [
go.Surface(
x=x2,
y=y2,
z=z2
)
]
layout = go.Layout(
title='Pearce',
autosize=False,
width=1000,
height=1000,
margin=dict(
l=65,
r=50,
b=65,
t=90
)
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(data, filename='pearce.html')
py.iplot(fig, filename='elevations-3d-surface')
x1 = np.linspace(t['vertices'].min(), t['vertices'].max(), len(t['vertices'].unique()))
y1 = np.linspace(t['edges'].min(), t['edges'].max(), len(t['edges'].unique()))
x2, y2 = np.meshgrid(x1, y1)
z2 = griddata((df['vertices'], df['edges']), df['pearceNR'], (x2, y2), method='cubic')
data = [
go.Surface(
x=x2,
y=y2,
z=z2
)
]
layout = go.Layout(
title='PearceNR',
autosize=False,
width=1000,
height=1000,
margin=dict(
l=65,
r=50,
b=65,
t=90
)
)
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(data, filename='pearceNR.html')
py.iplot(fig, filename='elevations-3d-surface')
###Output
_____no_output_____ |
Python/JCAE_Major/appendix_formatting.ipynb | ###Markdown
Select Features
###Code
cleaned_df = df[valid_features]
cleaned_df.shape
cleaned_df
import selection
importlib.reload(selection)
label = np.genfromtxt('Input/output_13.csv', delimiter=',')[::149,-1]
label.shape
filtered_df, relevance_table = selection.select_features(cleaned_df, label, n_jobs=0)
relevance_table
###Output
_____no_output_____
###Markdown
Model
###Code
#def tsfresh_ensemble(output_id):
output_id = 9
if True:
# Loading the required input
full_data = np.genfromtxt('Input/Output_{}.csv'.format(output_id),
delimiter=',')
L, W = full_data.shape
data = full_data[:,2:-1]
info = full_data[:,0:2]
n_measures = int(max(info[:,1]))
n_timeseries = int(max(info[:,0]))
label = full_data[::n_measures,-1]
scaler = MinMaxScaler(feature_range=(-1,1)).fit(data)
data = scaler.transform(data)
with open('Kernel/scaler.pkl', 'wb') as f:
pickle.dump(scaler, f)
full_data = np.concatenate((info,data), axis=1)
divisions = 5
idx = np.random.choice(range(n_timeseries),n_timeseries,replace=False)
idx_division = np.array_split(idx,divisions)
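    # For each of the 5 random folds of time-series ids: extract tsfresh features,
    # run feature selection, and accumulate the relevance p-values; the p-values are
    # averaged across the folds further below to pick the final feature set.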
for i,div in enumerate(idx_division):
div.sort()
indices = [d2 for d1 in div for d2 in range(d1*n_measures,(d1+1)*n_measures)]
ensemble_data = full_data[indices,:]
ensemble_label = label[div]
df = pd.DataFrame(ensemble_data, columns= ['id','time'] +
['Sensor_' + str(x) for x in range(1,W-2)])
extracted_features = tsfresh.extract_features(df, column_id="id", column_sort="time", n_jobs=0)
features = extracted_features.columns
nan_columns = []
for col in features:
nan_test = np.isnan(extracted_features.loc[:,col].values)
if any(nan == True for nan in nan_test):
nan_columns.append(col)
print('Percentage of invalid features: ', len(nan_columns)*100/len(features))
cleaned_features = features.drop(nan_columns)
cleaned_df = extracted_features[cleaned_features]
filtered_df, relevance_table = selection.select_features(cleaned_df, ensemble_label, n_jobs=0)
relevance_table.fillna(value=10)
if i == 0:
relevance_table_final = relevance_table.copy()
extracted_features_final = extracted_features.copy()
else:
relevance_table_final.p_value = relevance_table_final.p_value + relevance_table.p_value
extracted_features_final = pd.concat([extracted_features_final,extracted_features], axis=0)
extracted_features_final = extracted_features_final.sort_index()
relevance_table_final.p_value = relevance_table_final.p_value/divisions
relevance_table_final.relevant = relevance_table_final.p_value < 0.005
relevant_features = relevance_table_final[relevance_table_final.relevant].feature
extracted_features_final = extracted_features_final[relevant_features]
kind_to_fc_parameters = from_columns(relevant_features)
with open('Kernel/kind_to_fc_parameters.pkl', 'wb') as f:
pickle.dump(kind_to_fc_parameters, f)
with open('Kernel/columns.pkl', 'wb') as f:
pickle.dump(relevant_features.keys().tolist(), f)
with open('Kernel/final_target_{}.pkl'.format(output_id), 'wb') as f:
pickle.dump(label, f)
Output = {'FeaturesFiltered': extracted_features_final,
'FinalTarget': label,
'ID': int(output_id)}
#return Output
relevance_table_final
#def dynamic_tsfresh (output_id=0, mode='prototype'):
output_id = 9
if True:
with open('Kernel/scaler.pkl', 'rb') as f:
scaler = pickle.load(f)
# Loading streaming data
total_data = np.genfromtxt('Input/Output_' + str(output_id) + '.csv',delimiter=',')
data = total_data[:,2:-1]
info = total_data[:,0:2]
data = scaler.transform(data)
total_data = np.concatenate((info,data), axis=1)
df = pd.DataFrame(total_data, columns= ['id','time'] +
['Sensor_' + str(x) for x in range(1,(total_data.shape[1]-1))])
# Loading feature dictionary
with open('Kernel/kind_to_fc_parameters.pkl', 'rb') as f:
kind_to_fc_parameters = pickle.load(f)
# Loading column names
with open('Kernel/columns.pkl', 'rb') as f:
original_columns = pickle.load(f)
extracted_features = tsfresh.extract_features(df, column_id="id", column_sort="time", n_jobs=0)
final_features = extracted_features[original_columns]
#return impute(final_features), extracted_features
final_features
(final_features == Output['FeaturesFiltered']).sum().sum()
560*136
###Output
_____no_output_____ |
Contents/Labs/K-Means Clustering.ipynb | ###Markdown
k-means Clustering IntroductionThere are many models for clustering out there. In this lab, we present the model that is considered one of the simplest among them. Despite its simplicity, *k*-means is widely used for clustering in many data science applications, and it is especially useful if you need to quickly discover insights from unlabeled data.Some real-world applications of *k*-means include:- customer segmentation,- understanding what the visitors of a website are trying to accomplish,- pattern recognition, and- data compression.In this lab, we will work through *k*-means clustering with two examples:- *k*-means on a randomly generated dataset.- Using *k*-means for customer segmentation. Table of Contents1. k-means on a Randomly Generated Dataset 2. Using k-means for Customer Segmentation Before we start with the main lab content, let's import all the dependencies that we will need.
###Code
import random # library for random number generation
import numpy as np # library for vectorized computation
import pandas as pd # library to process data as dataframes
import matplotlib.pyplot as plt # plotting library
# backend for rendering plots within the browser
%matplotlib inline
from sklearn.cluster import KMeans
from sklearn.datasets.samples_generator import make_blobs
print('Libraries imported.')
###Output
Libraries imported.
###Markdown
1. *k*-means on a Randomly Generated Dataset Let's first demonstrate how *k*-means works with an example of engineered datapoints: 30 data points belonging to 2 different clusters (x1 is the first feature and x2 is the second feature).
###Code
# data
x1 = [-4.9, -3.5, 0, -4.5, -3, -1, -1.2, -4.5, -1.5, -4.5, -1, -2, -2.5, -2, -1.5, 4, 1.8, 2, 2.5, 3, 4, 2.25, 1, 0, 1, 2.5, 5, 2.8, 2, 2]
x2 = [-3.5, -4, -3.5, -3, -2.9, -3, -2.6, -2.1, 0, -0.5, -0.8, -0.8, -1.5, -1.75, -1.75, 0, 0.8, 0.9, 1, 1, 1, 1.75, 2, 2.5, 2.5, 2.5, 2.5, 3, 6, 6.5]
print('Datapoints defined!')
###Output
Datapoints defined!
###Markdown
Define a function that assigns each datapoint to a cluster
###Code
colors_map = np.array(['b', 'r'])
def assign_members(x1, x2, centers):
compare_to_first_center = np.sqrt(np.square(np.array(x1) - centers[0][0]) + np.square(np.array(x2) - centers[0][1]))
compare_to_second_center = np.sqrt(np.square(np.array(x1) - centers[1][0]) + np.square(np.array(x2) - centers[1][1]))
class_of_points = compare_to_first_center > compare_to_second_center
    colors = colors_map[class_of_points.astype(int)]  # booleans -> 0/1 indices into the color map
return colors, class_of_points
print('assign_members function defined!')
###Output
assign_members function defined!
###Markdown
Define a function that updates the centroid of each cluster
###Code
# update means
def update_centers(x1, x2, class_of_points):
center1 = [np.mean(np.array(x1)[~class_of_points]), np.mean(np.array(x2)[~class_of_points])]
center2 = [np.mean(np.array(x1)[class_of_points]), np.mean(np.array(x2)[class_of_points])]
return [center1, center2]
print('update_centers function defined!')
###Output
update_centers function defined!
###Markdown
Define a function that plots the data points along with the cluster centroids
###Code
def plot_points(centroids=None, colors='g', figure_title=None):
# plot the figure
fig = plt.figure(figsize=(15, 10)) # create a figure object
ax = fig.add_subplot(1, 1, 1)
centroid_colors = ['bx', 'rx']
if centroids:
for (i, centroid) in enumerate(centroids):
ax.plot(centroid[0], centroid[1], centroid_colors[i], markeredgewidth=5, markersize=20)
plt.scatter(x1, x2, s=500, c=colors)
# define the ticks
xticks = np.linspace(-6, 8, 15, endpoint=True)
yticks = np.linspace(-6, 6, 13, endpoint=True)
# fix the horizontal axis
ax.set_xticks(xticks)
ax.set_yticks(yticks)
# add tick labels
xlabels = xticks
ax.set_xticklabels(xlabels)
ylabels = yticks
ax.set_yticklabels(ylabels)
# style the ticks
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.tick_params('both', length=2, width=1, which='major', labelsize=15)
# add labels to axes
ax.set_xlabel('x1', fontsize=20)
ax.set_ylabel('x2', fontsize=20)
# add title to figure
ax.set_title(figure_title, fontsize=24)
plt.show()
print('plot_points function defined!')
###Output
plot_points function defined!
###Markdown
Initialize *k*-means - plot data points
###Code
plot_points(figure_title='Scatter Plot of x2 vs x1')
###Output
_____no_output_____
###Markdown
Initialize *k*-means - randomly define clusters and add them to plot
###Code
centers = [[-2, 2], [2, -2]]
plot_points(centers, figure_title='k-means Initialization')
###Output
_____no_output_____
###Markdown
Run *k*-means (4-iterations only)
###Code
number_of_iterations = 4
for i in range(number_of_iterations):
input('Iteration {} - Press Enter to update the members of each cluster'.format(i + 1))
colors, class_of_points = assign_members(x1, x2, centers)
title = 'Iteration {} - Cluster Assignment'.format(i + 1)
plot_points(centers, colors, figure_title=title)
input('Iteration {} - Press Enter to update the centers'.format(i + 1))
centers = update_centers(x1, x2, class_of_points)
title = 'Iteration {} - Centroid Update'.format(i + 1)
plot_points(centers, colors, figure_title=title)
###Output
Iteration 1 - Press Enter to update the members of each cluster
###Markdown
Now that we have visually observed how k-means works, let's look at an example with many more datapoints. For this example, we will randomly generate thousands of datapoints. Generating the Data First, we need to set up a random seed. We use Numpy's **random.seed()** function, and we will set the seed to 0. In other words, **random.seed(0)**.
###Code
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Next we will be making *random clusters* of points by using the **make_blobs** class. The **make_blobs** class can take in many inputs, but we will use these specific ones. Input n_samples: The total number of points equally divided among clusters. Value will be: 5000 centers: The number of centers to generate, or the fixed center locations. Value will be: [[4, 4], [-2, -1], [2, -3],[1,1]] cluster_std: The standard deviation of the clusters. Value will be: 0.9 Output X: Array of shape [n_samples, n_features]. (Feature Matrix) The generated samples. y: Array of shape [n_samples]. (Response Vector) The integer labels for cluster membership of each sample.
###Code
X, y = make_blobs(n_samples=5000, centers=[[4, 4], [-2, -1], [2, -3], [1, 1]], cluster_std=0.9)
###Output
_____no_output_____
###Markdown
Display the scatter plot of the randomly generated data.
###Code
plt.figure(figsize=(15, 10))
plt.scatter(X[:, 0], X[:, 1], marker='.')
###Output
_____no_output_____
###Markdown
Setting up *k*-means Now that we have our random data, let's set up our *k*-means clustering. The KMeans class has many parameters that can be used, but we will use these three: init: Initialization method of the centroids. Value will be: "k-means++". k-means++ selects initial cluster centers for k-means clustering in a smart way to speed up convergence. n_clusters: The number of clusters to form as well as the number of centroids to generate. Value will be: 4 (since we have 4 centers) n_init: Number of times the k-means algorithm will be run with different centroid seeds. The final results will be the best output of n_init consecutive runs in terms of inertia. Value will be: 12 Initialize KMeans with these parameters, where the output parameter is called **k_means**.
###Code
k_means = KMeans(init="k-means++", n_clusters=4, n_init=12)
###Output
_____no_output_____
###Markdown
Now let's fit the KMeans model with the feature matrix we created above, X .
###Code
k_means.fit(X)
###Output
_____no_output_____
###Markdown
Now let's grab the labels for each point in the model using KMeans **.labels\_** attribute and save it as **k_means_labels**.
###Code
k_means_labels = k_means.labels_
k_means_labels
###Output
_____no_output_____
###Markdown
We will also get the coordinates of the cluster centers using the KMeans **.cluster\_centers\_** attribute and save them as **k_means_cluster_centers**.
###Code
k_means_cluster_centers = k_means.cluster_centers_
k_means_cluster_centers
###Output
_____no_output_____
###Markdown
Visualizing the Resulting Clusters So now that we have the random data generated and the KMeans model initialized, let's plot them and see what the clusters look like. Please read through the code and comments to understand how to plot the model.
###Code
# initialize the plot with the specified dimensions.
fig = plt.figure(figsize=(15, 10))
# colors uses a color map, which will produce an array of colors based on
# the number of labels. We use set(k_means_labels) to get the
# unique labels.
colors = plt.cm.Spectral(np.linspace(0, 1, len(set(k_means_labels))))
# create a plot
ax = fig.add_subplot(1, 1, 1)
# loop through the data and plot the datapoints and centroids.
# k will range from 0-3, which will match the number of clusters in the dataset.
for k, col in zip(range(len([[4,4], [-2, -1], [2, -3], [1, 1]])), colors):
    # create a list of all datapoints, where the datapoints that are
# in the cluster (ex. cluster 0) are labeled as true, else they are
# labeled as false.
my_members = (k_means_labels == k)
# define the centroid, or cluster center.
cluster_center = k_means_cluster_centers[k]
# plot the datapoints with color col.
ax.plot(X[my_members, 0], X[my_members, 1], 'w', markerfacecolor=col, marker='.')
# plot the centroids with specified color, but with a darker outline
ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=6)
# title of the plot
ax.set_title('KMeans')
# remove x-axis ticks
ax.set_xticks(())
# remove y-axis ticks
ax.set_yticks(())
# show the plot
plt.show()
###Output
_____no_output_____
###Markdown
2. Using *k*-means for Customer Segmentation Imagine that you have a customer dataset, and you are interested in exploring the behavior of your customers using their historical data. Customer segmentation is the practice of partitioning a customer base into groups of individuals that have similar characteristics. It is a significant strategy, as a business can target these specific groups of customers and allocate marketing resources effectively. For example, one group might contain customers who are high-profit and low-risk, that is, more likely to purchase products or subscribe to a service. A business task is to retain those customers. Another group might include customers from non-profit organizations, and so on. Downloading Data Let's download the data and save it as a CSV file called **customer_segmentation.csv**
###Code
import urllib.request
url = "https://cocl.us/customer_dataset"
urllib.request.urlretrieve(url, "customer_segmentation.csv")  # save a local copy, as described above
print('Data downloaded!')
###Output
Data downloaded!
###Markdown
Now that the data is downloaded, let's read it into a *pandas* dataframe.
###Code
url = "https://cocl.us/customer_dataset"
customers_df = pd.read_csv(url)
customers_df.head()
###Output
_____no_output_____
###Markdown
Pre-processing As you can see, **Address** in this dataset is a categorical variable. The k-means algorithm isn't directly applicable to categorical variables because the Euclidean distance function isn't really meaningful for discrete variables. So, let's drop this feature and run clustering.
###Code
df = customers_df.drop('Address', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Now let's normalize the dataset. But why do we need normalization in the first place? Normalization is a statistical method that helps distance-based algorithms treat features with different magnitudes and distributions equally. We use **StandardScaler()** to normalize our dataset.
###Code
from sklearn.preprocessing import StandardScaler
X = df.values[:,1:]
X = np.nan_to_num(X)
cluster_dataset = StandardScaler().fit_transform(X)
cluster_dataset
###Output
_____no_output_____
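###Markdown
Concretely, **StandardScaler** standardizes each feature column on its own: a value $x$ in a column with mean $\mu$ and standard deviation $\sigma$ is rescaled to $z = (x - \mu) / \sigma$, so every feature ends up with zero mean and unit variance.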
###Markdown
Modeling Let's run our model and group our customers into three clusters.
###Code
num_clusters = 3
k_means = KMeans(init="k-means++", n_clusters=num_clusters, n_init=12)
k_means.fit(cluster_dataset)
labels = k_means.labels_
print(labels)
###Output
[0 1 2 0 1 1 0 0 0 1 2 0 0 0 2 0 0 0 1 0 0 0 2 1 1 0 0 0 0 0 0 1 2 0 0 0 2
2 0 1 2 1 0 1 0 1 0 0 0 0 1 1 2 0 2 2 2 0 0 0 1 0 1 1 0 0 0 2 0 2 0 0 0 0
0 0 0 0 1 0 0 2 1 0 1 0 0 0 2 2 0 0 2 2 0 0 0 0 2 0 2 1 0 2 2 1 0 0 0 0 0
0 0 2 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 0 0 1 0 0 2
0 0 0 1 2 0 0 0 0 1 2 2 0 1 0 0 0 0 0 0 0 0 1 0 0 2 0 2 0 0 2 1 2 0 0 1 2
1 0 0 0 0 0 1 0 2 0 0 0 1 1 0 1 0 2 0 0 2 0 1 0 2 0 0 0 0 0 2 2 1 0 0 2 1
0 0 0 0 1 0 0 2 0 0 0 0 1 0 0 2 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 2 0 2 0
0 0 1 0 2 1 2 0 1 0 0 2 0 0 0 0 2 2 2 0 0 0 1 0 0 1 0 1 0 0 1 0 0 0 2 0 0
2 0 2 1 0 0 0 0 2 0 0 2 2 0 0 0 0 0 0 0 0 2 0 2 1 0 2 0 0 0 2 2 0 0 0 1 2
0 0 2 0 1 0 0 0 0 0 2 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 2 0 2 0 0 0 1 1 0
2 0 1 2 2 0 0 0 2 0 0 0 0 0 1 0 1 0 0 0 0 2 0 2 0 0 0 1 0 0 0 0 2 0 0 2 2
1 0 0 0 0 0 2 2 0 1 2 1 0 0 2 0 0 1 1 0 2 0 0 1 0 2 0 1 0 0 0 1 0 0 0 0 1
0 2 0 0 0 0 1 2 0 0 1 0 2 0 0 1 0 1 0 0 0 0 0 0 0 1 1 0 0 1 0 2 0 0 0 2 0
2 0 0 0 0 0 1 2 2 0 1 0 1 0 0 2 1 0 2 2 2 1 1 2 0 0 2 0 2 2 0 2 1 0 0 2 0
2 1 2 0 0 2 0 0 2 2 2 0 0 0 1 1 0 0 2 0 0 2 1 0 2 0 0 0 2 0 1 0 1 1 0 1 0
0 1 0 2 0 0 0 0 2 2 0 1 0 1 0 0 1 0 2 0 2 0 2 2 2 1 2 0 0 0 2 0 0 0 1 0 1
0 2 2 0 0 0 0 0 0 0 2 1 0 1 0 0 2 0 0 0 2 0 0 2 2 2 2 0 1 0 2 2 0 0 0 0 1
1 0 2 0 0 1 0 0 1 0 1 0 0 1 2 1 1 1 2 0 0 2 0 1 1 0 0 0 1 2 0 0 0 0 1 0 0
0 0 0 2 0 0 1 0 0 1 0 0 0 0 0 0 2 1 0 0 2 0 0 0 0 2 0 1 0 0 1 0 0 2 0 2 0
2 2 0 0 0 1 2 1 0 1 1 0 2 0 1 0 1 0 0 0 0 0 1 0 2 0 0 1 1 0 0 1 0 0 0 0 0
0 0 0 2 0 0 1 0 0 0 0 0 0 0 2 0 0 0 1 2 1 1 0 0 0 2 0 0 0 2 2 0 2 0 0 0 1
0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 2 2 0 2 0 0 0 0 1 2 0 0 0 0 0 1 2 0 0 0 2
0 0 2 0 0 0 0 0 0 2 2 1 1 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 1]
###Markdown
Insights Note that each row in our dataset represents a customer, and therefore, each row is assigned a label.
###Code
df["Labels"] = labels
df.head(5)
###Output
_____no_output_____
###Markdown
We can easily check the centroid values by averaging the features in each cluster.
###Code
df.groupby('Labels').mean()
###Output
_____no_output_____ |
docs_src/gen_doc.nbtest.ipynb | ###Markdown
Functional Test Documentation Generates documentation for fastai's functional tests
###Code
from fastai.gen_doc.nbdoc import *
from fastai.gen_doc.nbtest import *
###Output
_____no_output_____
###Markdown
Find tests for any function/class [`show_test`](/gen_doc.nbtest.html#show_test) and [`doctest`](/gen_doc.nbtest.html#doctest) search for any implemented tests for a given fastai class or function. For test writers: * Use this module to search for tests and get a better idea of which parts of the fastai API need more functional tests. For fastai users: * Usage is similar to [`nbdoc.show_doc`](/gen_doc.nbdoc.html#show_doc) and [`nbdoc.doc`](/gen_doc.nbdoc.html#doc). * It's here to help you find the tests associated with a given function, which can help you understand its usage. Usage:
###Code
show_doc(show_test)
###Output
_____no_output_____
###Markdown
**Show tests from function**
###Code
from fastai.basic_train import Learner
show_test(Learner.fit)
###Output
_____no_output_____
###Markdown
**Show tests from a Class**
###Code
from fastai.basic_data import DataBunch
show_test(DataBunch)
from fastai.text.data import TextList
show_test(TextList)
###Output
_____no_output_____
###Markdown
Different test types Above, you will see 2 different test types: `Tests found for...` and `Some other tests...`* `Tests found for...` - Searches for function matches in `test_registry.json`. This json file is populated from `doctest.this_tests` calls.* `Some other tests...` - Returns any test function where the fastai function is called inside the body. Show in notebook inline:
###Code
show_doc(doctest)
###Output
_____no_output_____
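###Markdown
As a rough sketch of how `test_registry.json` gets populated (assuming `this_tests` is importable from `fastai.gen_doc.doctest`; the test name and file below are purely illustrative), a test function registers the API it exercises before running its assertions:
###Code
# Illustrative only: a test file such as tests/test_basic_train.py might contain
from fastai.gen_doc.doctest import this_tests  # assumed import path
from fastai.basic_train import Learner
def test_fit(learn):
    this_tests(Learner.fit)  # records this test against Learner.fit in the registry
    learn.fit(1)
###Output
_____no_output_____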
###Markdown
Internal search methods
###Code
show_doc(lookup_db)
show_doc(find_related_tests)
show_doc(find_test_matches)
show_doc(find_test_files)
show_doc(fuzzy_test_match)
###Output
_____no_output_____
###Markdown
Functional Test Documentation Generates documentation for fastai's functional tests
###Code
from fastai.gen_doc.nbdoc import *
from fastai.gen_doc.nbtest import *
###Output
_____no_output_____
###Markdown
Find tests for any function/class [`show_test`](/gen_doc.nbtest.html#show_test) and [`doctest`](/gen_doc.nbtest.html#doctest) search for any implemented tests for a given fastai class or function. For test writers: * Use this module to search for tests and get a better idea of which parts of the fastai API need more functional tests. For fastai users: * Usage is similar to [`nbdoc.show_doc`](/gen_doc.nbdoc.html#show_doc) and [`nbdoc.doc`](/gen_doc.nbdoc.html#doc). * It's here to help you find the tests associated with a given function, which can help you understand its usage. Usage:
###Code
show_doc(show_test)
###Output
_____no_output_____
###Markdown
**Show tests from function**
###Code
from fastai.basic_train import Learner
show_test(Learner.fit)
###Output
_____no_output_____
###Markdown
**Show tests from a Class**
###Code
from fastai.basic_data import DataBunch
show_test(DataBunch)
from fastai.text.data import TextList
show_test(TextList)
###Output
_____no_output_____
###Markdown
Different test types Above, you will see 3 different test types: `This tests`, `Direct tests`, and `Related tests`* `This tests` - Searches for function matches in `test_api_db.json`. This json file is populated from `doctest.this_tests` calls.* `Direct tests` - Searches for any test function whose name contains the fastai function call* `Related tests` - Returns any test function where the fastai function is called inside the body. Show in notebook inline:
###Code
show_doc(doctest)
###Output
_____no_output_____
###Markdown
Internal search methods
###Code
show_doc(find_dir_tests)
show_doc(lookup_db)
show_doc(find_test_matches)
show_doc(find_test_files)
show_doc(direct_test_match)
show_doc(fuzzy_test_match)
###Output
_____no_output_____
###Markdown
gen_doc.nbtest Notebook functions to search for api tests
###Code
from fastai.gen_doc.nbdoc import *
from fastai.gen_doc.nbtest import *
###Output
_____no_output_____
###Markdown
Find tests for any function/class [`show_test`](/gen_doc.nbtest.html#show_test) and [`doctest`](/gen_doc.nbtest.html#doctest) search for any implemented tests for a given fastai class or function. For test writers: * Use this module to search for tests and get a better idea of which parts of the fastai API need more functional tests. For fastai users: * Usage is similar to [`nbdoc.show_doc`](/gen_doc.nbdoc.html#show_doc) and [`nbdoc.doc`](/gen_doc.nbdoc.html#doc). * It's here to help you find the tests associated with a given function, which can help you understand its usage. Usage:
###Code
show_doc(show_test)
###Output
_____no_output_____
###Markdown
**Show tests from function**
###Code
from fastai.basic_train import Learner
show_test(Learner.fit)
###Output
_____no_output_____
###Markdown
**Show tests from a Class**
###Code
from fastai.basic_data import DataBunch
show_test(DataBunch)
from fastai.text.data import TextList
show_test(TextList)
###Output
_____no_output_____
###Markdown
Different test types Above, you will see 3 different test types: `This tests`, `Direct tests`, and `Related tests`* `This tests` - Searches for function matches in `test_api_db.json`. This json file is populated from `doctest.this_tests` calls.* `Direct tests` - Searches for any test function whose name contains the fastai function call* `Related tests` - Returns any test function where the fastai function is called inside the body. Show in notebook inline:
###Code
show_doc(doctest)
###Output
_____no_output_____
###Markdown
Internal search methods
###Code
show_doc(find_dir_tests)
show_doc(lookup_db)
show_doc(find_test_matches)
show_doc(find_test_files)
show_doc(direct_test_match)
show_doc(fuzzy_test_match)
###Output
_____no_output_____
###Markdown
gen_doc.nbtest Notebook functions to search for api tests
###Code
from fastai.gen_doc.nbdoc import *
from fastai.gen_doc.nbtest import *
###Output
_____no_output_____
###Markdown
Find tests for any function/class [`show_test`](/gen_doc.nbtest.html#show_test) and [`doctest`](/gen_doc.nbtest.html#doctest) search for any implemented tests for a given fastai class or function. For test writers: * Use this module to search for tests and get a better idea of which parts of the fastai API need more functional tests. For fastai users: * Usage is similar to [`nbdoc.show_doc`](/gen_doc.nbdoc.html#show_doc) and [`nbdoc.doc`](/gen_doc.nbdoc.html#doc). * It's here to help you find the tests associated with a given function, which can help you understand its usage. Usage:
###Code
show_doc(show_test)
###Output
_____no_output_____
###Markdown
**Show tests from function**
###Code
from fastai.basic_train import Learner
show_test(Learner.fit)
###Output
_____no_output_____
###Markdown
**Show tests from a Class**
###Code
from fastai.basic_data import DataBunch
show_test(DataBunch)
from fastai.text.data import TextList
show_test(TextList)
###Output
_____no_output_____
###Markdown
Different test types Above, you will see 3 different test types: `This tests`, `Direct tests`, and `Related tests`* `This tests` - Searches for function matches in `test_api_db.json`. This json file is populated from `doctest.this_tests` calls.* `Direct tests` - Searches for any test function whose name contains the fastai function call* `Related tests` - Returns any test function where the fastai function is called inside the body. Show in notebook inline:
###Code
show_doc(doctest)
###Output
_____no_output_____
###Markdown
Internal search methods
###Code
show_doc(find_tests)
###Output
_____no_output_____
###Markdown
[`find_tests`](/gen_doc.nbtest.html#find_tests)
###Code
show_doc(lookup_db)
###Output
_____no_output_____
###Markdown
[`lookup_db`](/gen_doc.nbtest.html#lookup_db)
###Code
show_doc(find_test_matches)
###Output
_____no_output_____
###Markdown
[`find_test_matches`](/gen_doc.nbtest.html#find_test_matches)
###Code
show_doc(find_test_files)
###Output
_____no_output_____
###Markdown
[`find_test_files`](/gen_doc.nbtest.html#find_test_files)
###Code
show_doc(direct_test_match)
###Output
_____no_output_____
###Markdown
[`direct_test_match`](/gen_doc.nbtest.html#direct_test_match)
###Code
show_doc(fuzzy_test_match)
###Output
_____no_output_____ |
notebooks/StackInspector.ipynb | ###Markdown
Inspecting Call StacksIn this book, for many purposes, we need to lookup a function's location, source code, or simply definition. The class `StackInspector` provides a number of convenience methods for this purpose. **Prerequisites*** This is an internal helper class.* Understanding how frames and local variables are represented in Python helps. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.StackInspector import ```and then make use of the following features.`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.```python>>> class StackInspectorDemo(StackInspector):>>> def callee(self) -> None:>>> func = self.caller_function()>>> assert func.__name__ == 'test'>>> print(func)>>> >>> def caller(self) -> None:>>> self.callee()>>> def test() -> None:>>> demo = StackInspectorDemo()>>> demo.caller()>>> test()```Here are all methods defined in this chapter: Inspecting Call Stacks`StackInspector` is a class that provides a number of utility functions to inspect a [call stack](https://en.wikipedia.org/wiki/Call_stack), notably to identify caller functions. When tracing or instrumenting functions, a common issue is to identify the currently active functions. A typical situation is depicted below, where `my_inspector()` currently traces a function called `function_under_test()`.| Function | Class | || --- | --- | --- || ... | `StackInspector` | || `caller_frame()` | `StackInspector` | invokes $\uparrow$ || `caller_function()` | `StackInspector` | invokes $\uparrow$ || `my_inspector()` | some inspector; a subclass of `StackInspector` | invokes $\uparrow$ || `function_under_test()` | (any) | is traced by $\uparrow$ || -/- | (any) | invokes $\uparrow$ |To determine the calling function, `my_inspector()` could check the current frame and retrieve the frame of the caller. However, this caller could be some tracing function again invoking `my_inspector()`. Therefore, `StackInspector` provides a method `caller_function()` that returns the first caller outside of a `StackInspector` class. This way, a subclass of `StackInspector` can define an arbitrary set of functions (and call stack); `caller_function()` will always return a function outside of the `StackInspector` subclass.
###Code
import bookutils
import inspect
import warnings
from types import FunctionType, FrameType, TracebackType
# ignore
from typing import cast, Dict, Any, Tuple, Callable, Optional, Type
###Output
_____no_output_____
###Markdown
The method `caller_frame()` walks the current call stack from the current frame towards callers (using the `f_back` attribute of the current frame) and returns the first frame that is _not_ a method or function from the current `StackInspector` class or its subclass. To determine this, the method `our_frame()` determines whether the given execution frame refers to one of the methods of `StackInspector` or one of its subclasses.
###Code
class StackInspector:
"""Provide functions to inspect the stack"""
def caller_frame(self) -> FrameType:
"""Return the frame of the caller."""
# Walk up the call tree until we leave the current class
frame = cast(FrameType, inspect.currentframe())
while self.our_frame(frame):
frame = cast(FrameType, frame.f_back)
return frame
def our_frame(self, frame: FrameType) -> bool:
"""Return true if `frame` is in the current (inspecting) class."""
return isinstance(frame.f_locals.get('self'), self.__class__)
###Output
_____no_output_____
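###Markdown
As a quick sanity check, the frame returned by `caller_frame()` should itself lie outside the `StackInspector` class:
###Code
inspector = StackInspector()
frame = inspector.caller_frame()
assert not inspector.our_frame(frame)  # the caller's frame is not ours
###Output
_____no_output_____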
###Markdown
When we access program state or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as a replacement for `globals()`, using `caller_frame()`; likewise, `caller_locals()` stands in for `locals()`.
###Code
class StackInspector(StackInspector):
def caller_globals(self) -> Dict[str, Any]:
"""Return the globals() environment of the caller."""
return self.caller_frame().f_globals
def caller_locals(self) -> Dict[str, Any]:
"""Return the locals() environment of the caller."""
return self.caller_frame().f_locals
###Output
_____no_output_____
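###Markdown
A small usage example: from the perspective of a `StackInspector` instance, our own top-level environment is the "caller's" environment.
###Code
some_global = 42
inspector = StackInspector()
assert inspector.caller_globals()['some_global'] == 42
assert 'some_global' in inspector.caller_locals()  # at the top level, globals and locals coincide
###Output
_____no_output_____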
###Markdown
The method `caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future.
###Code
Location = Tuple[Callable, int]
class StackInspector(StackInspector):
def caller_location(self) -> Location:
"""Return the location (func, lineno) of the caller."""
return self.caller_function(), self.caller_frame().f_lineno
###Output
_____no_output_____
###Markdown
The function `search_frame()` allows us to search for an item named `name`, walking up the call stack. This is handy when trying to find local functions during tracing, for which typically only the name is provided.
###Code
class StackInspector(StackInspector):
def search_frame(self, name: str, frame: Optional[FrameType] = None) -> \
Tuple[Optional[FrameType], Optional[Callable]]:
"""
Return a pair (`frame`, `item`)
in which the function `name` is defined as `item`.
"""
if frame is None:
frame = self.caller_frame()
while frame:
item = None
if name in frame.f_globals:
item = frame.f_globals[name]
if name in frame.f_locals:
item = frame.f_locals[name]
if item and callable(item):
return frame, item
frame = cast(FrameType, frame.f_back)
return None, None
def search_func(self, name: str, frame: Optional[FrameType] = None) -> \
Optional[Callable]:
"""Search in callers for a definition of the function `name`"""
frame, func = self.search_frame(name, frame)
return func
###Output
_____no_output_____
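###Markdown
For instance, `search_func()` can retrieve a function that is visible from the caller by its name alone (using a throwaway `demo_function` for illustration):
###Code
def demo_function() -> int:  # a function defined at the caller's level
    return 42
inspector = StackInspector()
assert inspector.search_func('demo_function') is demo_function
###Output
_____no_output_____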
###Markdown
If we cannot find a function by name, we can create one, using `create_function()`.
###Code
class StackInspector(StackInspector):
# Avoid generating functions more than once
_generated_function_cache: Dict[Tuple[str, int], Callable] = {}
def create_function(self, frame: FrameType) -> Callable:
"""Create function for given frame"""
name = frame.f_code.co_name
cache_key = (name, frame.f_lineno)
if cache_key in self._generated_function_cache:
return self._generated_function_cache[cache_key]
try:
# Create new function from given code
generated_function = cast(Callable,
FunctionType(frame.f_code,
globals=frame.f_globals,
name=name))
except TypeError:
# Unsuitable code for creating a function
# Last resort: Return some function
generated_function = self.unknown
except Exception as exc:
# Any other exception
warnings.warn(f"Couldn't create function for {name} "
f" ({type(exc).__name__}: {exc})")
generated_function = self.unknown
self._generated_function_cache[cache_key] = generated_function
return generated_function
###Output
_____no_output_____
###Markdown
The method `caller_function()` puts all of these together, simply looking up and returning the currently calling function – and creating one if it cannot be found.
###Code
class StackInspector(StackInspector):
def caller_function(self) -> Callable:
"""Return the calling function"""
frame = self.caller_frame()
name = frame.f_code.co_name
func = self.search_func(name)
if func:
return func
if not name.startswith('<'):
warnings.warn(f"Couldn't find {name} in caller")
return self.create_function(frame)
def unknown(self) -> None: # Placeholder for unknown functions
pass
###Output
_____no_output_____
###Markdown
The method `is_internal_error()` allows us to differentiate whether some exception was raised by `StackInspector` (or a subclass) – or whether it was raised by the inspected code.
###Code
import traceback
class StackInspector(StackInspector):
def is_internal_error(self, exc_tp: Type,
exc_value: BaseException,
exc_traceback: TracebackType) -> bool:
"""Return True if exception was raised from `StackInspector` or a subclass."""
if not exc_tp:
return False
for frame, lineno in traceback.walk_tb(exc_traceback):
if self.our_frame(frame):
return True
return False
###Output
_____no_output_____
###Markdown
Synopsis`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.
###Code
class StackInspectorDemo(StackInspector):
def callee(self) -> None:
func = self.caller_function()
assert func.__name__ == 'test'
print(func)
def caller(self) -> None:
self.callee()
def test() -> None:
demo = StackInspectorDemo()
demo.caller()
test()
###Output
_____no_output_____
###Markdown
Here are all methods defined in this chapter:
###Code
# ignore
from ClassDiagram import display_class_hierarchy, class_tree
# ignore
display_class_hierarchy([StackInspector],
abstract_classes=[
StackInspector,
],
public_methods=[
StackInspector.caller_frame,
StackInspector.caller_function,
StackInspector.caller_globals,
StackInspector.caller_locals,
StackInspector.caller_location,
StackInspector.search_frame,
StackInspector.search_func,
StackInspector.is_internal_error,
StackInspector.our_frame,
],
project='debuggingbook')
###Output
_____no_output_____
###Markdown
Inspecting Call StacksIn this book, for many purposes, we need to lookup a function's location, source code, or simply definition. The class `StackInspector` provides a number of convenience methods for this purpose. **Prerequisites*** This is an internal helper class.* Understanding how frames and local variables are represented in Python helps. SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from debuggingbook.StackInspector import ```and then make use of the following features.`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.```python>>> class StackInspectorDemo(StackInspector):>>> def callee(self) -> None:>>> func = self.caller_function()>>> assert func.__name__ == 'test'>>> print(func)>>> >>> def caller(self) -> None:>>> self.callee()>>> def test() -> None:>>> demo = StackInspectorDemo()>>> demo.caller()>>> test()```Here are all methods defined in this chapter: Inspecting Call Stacks`StackInspector` is a class that provides a number of utility functions to inspect a [call stack](https://en.wikipedia.org/wiki/Call_stack), notably to identify caller functions. When tracing or instrumenting functions, a common issue is to identify the currently active functions. A typical situation is depicted below, where `my_inspector()` currently traces a function called `function_under_test()`.| Function | Class | || --- | --- | --- || ... | `StackInspector` | || `caller_frame()` | `StackInspector` | invokes $\uparrow$ || `caller_function()` | `StackInspector` | invokes $\uparrow$ || `my_inspector()` | some inspector; a subclass of `StackInspector` | invokes $\uparrow$ || `function_under_test()` | (any) | is traced by $\uparrow$ || -/- | (any) | invokes $\uparrow$ |To determine the calling function, `my_inspector()` could check the current frame and retrieve the frame of the caller. However, this caller could be some tracing function again invoking `my_inspector()`. Therefore, `StackInspector` provides a method `caller_function()` that returns the first caller outside of a `StackInspector` class. This way, a subclass of `StackInspector` can define an arbitrary set of functions (and call stack); `caller_function()` will always return a function outside of the `StackInspector` subclass.
###Code
import bookutils
import inspect
import warnings
from types import FunctionType, FrameType, TracebackType
from typing import cast, Dict, Any, Tuple, Callable, Optional, Type
###Output
_____no_output_____
###Markdown
The method `caller_frame()` walks the current call stack from the current frame towards callers (using the `f_back` attribute of the current frame) and returns the first frame that is _not_ a method or function from the current `StackInspector` class or its subclass. To determine this, the method `our_frame()` determines whether the given execution frame refers to one of the methods of `StackInspector` or one of its subclasses.
###Code
class StackInspector:
"""Provide functions to inspect the stack"""
def caller_frame(self) -> FrameType:
"""Return the frame of the caller."""
# Walk up the call tree until we leave the current class
frame = cast(FrameType, inspect.currentframe())
while self.our_frame(frame):
frame = cast(FrameType, frame.f_back)
return frame
def our_frame(self, frame: FrameType) -> bool:
"""Return true if `frame` is in the current (inspecting) class."""
return isinstance(frame.f_locals.get('self'), self.__class__)
###Output
_____no_output_____
###Markdown
When we access program state or execute functions, we do so in the caller's environment, not ours. The `caller_globals()` method acts as a replacement for `globals()`, and `caller_locals()` likewise replaces `locals()`, both using `caller_frame()`.
###Code
class StackInspector(StackInspector):
def caller_globals(self) -> Dict[str, Any]:
"""Return the globals() environment of the caller."""
return self.caller_frame().f_globals
def caller_locals(self) -> Dict[str, Any]:
"""Return the locals() environment of the caller."""
return self.caller_frame().f_locals
###Output
_____no_output_____
###Markdown
The method `caller_location()` returns the caller's function and its location. It does a fair bit of magic to retrieve nested functions, by looking through global and local variables until a match is found. This may be simplified in the future.
###Code
Location = Tuple[Callable, int]
class StackInspector(StackInspector):
def caller_location(self) -> Location:
"""Return the location (func, lineno) of the caller."""
return self.caller_function(), self.caller_frame().f_lineno
###Output
_____no_output_____
###Markdown
The function `search_frame()` allows us to search for an item named `name`, walking up the call stack. This is handy when trying to find local functions during tracing, for which typically only the name is provided.
###Code
class StackInspector(StackInspector):
def search_frame(self, name: str, frame: Optional[FrameType] = None) -> \
Tuple[Optional[FrameType], Optional[Callable]]:
"""
Return a pair (`frame`, `item`)
in which the function `name` is defined as `item`.
"""
if frame is None:
frame = self.caller_frame()
while frame:
item = None
if name in frame.f_globals:
item = frame.f_globals[name]
if name in frame.f_locals:
item = frame.f_locals[name]
if item and callable(item):
return frame, item
frame = cast(FrameType, frame.f_back)
return None, None
def search_func(self, name: str, frame: Optional[FrameType] = None) -> \
Optional[Callable]:
"""Search in callers for a definition of the function `name`"""
frame, func = self.search_frame(name, frame)
return func
###Output
_____no_output_____
###Markdown
If we cannot find a function by name, we can create one, using `create_function()`.
###Code
class StackInspector(StackInspector):
# Avoid generating functions more than once
_generated_function_cache: Dict[Tuple[str, int], Callable] = {}
def create_function(self, frame: FrameType) -> Callable:
"""Create function for given frame"""
name = frame.f_code.co_name
cache_key = (name, frame.f_lineno)
if cache_key in self._generated_function_cache:
return self._generated_function_cache[cache_key]
try:
# Create new function from given code
generated_function = cast(Callable,
FunctionType(frame.f_code,
globals=frame.f_globals,
name=name))
except TypeError:
# Unsuitable code for creating a function
# Last resort: Return some function
generated_function = self.unknown
except Exception as exc:
# Any other exception
warnings.warn(f"Couldn't create function for {name} "
f" ({type(exc).__name__}: {exc})")
generated_function = self.unknown
self._generated_function_cache[cache_key] = generated_function
return generated_function
###Output
_____no_output_____
###Markdown
The method `caller_function()` puts all of these together, simply looking up and returning the currently calling function – and creating one if it cannot be found.
###Code
class StackInspector(StackInspector):
def caller_function(self) -> Callable:
"""Return the calling function"""
frame = self.caller_frame()
name = frame.f_code.co_name
func = self.search_func(name)
if func:
return func
if not name.startswith('<'):
warnings.warn(f"Couldn't find {name} in caller")
return self.create_function(frame)
def unknown(self) -> None: # Placeholder for unknown functions
pass
###Output
_____no_output_____
###Markdown
The method `is_internal_error()` allows us to differentiate whether some exception was raised by `StackInspector` (or a subclass) – or whether it was raised by the inspected code.
###Code
import traceback
class StackInspector(StackInspector):
def is_internal_error(self, exc_tp: Type,
exc_value: BaseException,
exc_traceback: TracebackType) -> bool:
"""Return True if exception was raised from `StackInspector` or a subclass."""
if not exc_tp:
return False
for frame, lineno in traceback.walk_tb(exc_traceback):
if self.our_frame(frame):
return True
return False
###Output
_____no_output_____
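###Markdown
As a small usage sketch (hypothetical, not part of the original chapter), an exception handler in a subclass could use `is_internal_error()` to decide whether a failure came from the inspector itself or from the inspected code:
###Code
# Hypothetical sketch: deciding how to handle an exception caught while inspecting.
# `handle_exception()` is our own name; it is meant to be called from within an `except` block.
import sys

def handle_exception(inspector: StackInspector) -> None:
    exc_tp, exc_value, exc_traceback = sys.exc_info()
    if inspector.is_internal_error(exc_tp, exc_value, exc_traceback):
        raise  # a bug in the inspector itself; let it surface
    # otherwise, the inspected code raised the exception; record or report it here
###Output
_____no_output_____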
###Markdown
Synopsis`StackInspector` is typically used as superclass, providing its functionality to subclasses. Here is an example of how to use `caller_function()`. The `test()` function invokes an internal method `caller()` of `StackInspectorDemo`, which in turn invokes `callee()`:| Function | Class | || --- | --- | --- || `callee()` | `StackInspectorDemo` | || `caller()` | `StackInspectorDemo` | invokes $\uparrow$ || `test()` | (main) | invokes $\uparrow$ || -/- | (main) | invokes $\uparrow$ |Using `caller_function()`, `callee()` determines the first caller outside of a `StackInspector` class and prints it out – i.e., ``.
###Code
class StackInspectorDemo(StackInspector):
def callee(self) -> None:
func = self.caller_function()
assert func.__name__ == 'test'
print(func)
def caller(self) -> None:
self.callee()
def test() -> None:
demo = StackInspectorDemo()
demo.caller()
test()
###Output
_____no_output_____
###Markdown
Here are all methods defined in this chapter:
###Code
# ignore
from ClassDiagram import display_class_hierarchy, class_tree
# ignore
display_class_hierarchy([StackInspector],
abstract_classes=[
StackInspector,
],
public_methods=[
StackInspector.caller_frame,
StackInspector.caller_function,
StackInspector.caller_globals,
StackInspector.caller_locals,
StackInspector.caller_location,
StackInspector.search_frame,
StackInspector.search_func,
StackInspector.is_internal_error,
StackInspector.our_frame,
],
project='debuggingbook')
###Output
_____no_output_____ |
notebooks/GettingStarted.ipynb | ###Markdown
Getting Started Contents1. The matmodlab namespace1. Defining a Model 1. Material Model Definition 2. Step Definitions2. Running a Model3. Model Outputs4. Viewing Model Results The matmodlab NamespaceA notebook should include the following statement to import the Matmodlab namespace from matmodlab2 import *
###Code
%pylab inline
from matmodlab2 import *
###Output
Populating the interactive namespace from numpy and matplotlib
Setting up the Matmodlab notebook environment
###Markdown
Defining a ModelThe purpose of a Matmodlab model is to predict the response of a material to deformation. A Matmodlab model requires two parts to be fully defined: - *Material* model: the material type and associated parameters.- Deformation *step[s]*: defines deformation paths through which the material model is exercised.The `MaterialPointSimulator` object manages and allocates memory for materials and analysis steps. Minimally, instantiating a `MaterialPointSimulator` object requires a simulation ID:
###Code
mps = MaterialPointSimulator('jobid')
###Output
_____no_output_____
###Markdown
Other optional arguments to `MaterialPointSimulator` are- `output_format` defines the output format of the simulation results. Valid choices are `REC` [default] and `TXT`.- `d` specifies the directory to which simulation results are written. The default is the current directory.**Note:** by default results *are not* written when exercised from the Notebook. If written results are required, the `MaterialPointSimulator.dump` method must be called explicitly. Material model definitionA material model must be instantiated and assigned to the `MaterialPointSimulator` object. In this example, the `ElasticMaterial` (provided in the `matmodlab` namespace) is used.
###Code
E = 10
Nu = .1
mat = ElasticMaterial(E=E, Nu=Nu)
mps.assign_material(mat)
###Output
_____no_output_____
###Markdown
The `ElasticMaterial` is a linear elastic model implemented in Python. The source code is contained in `matmodlab/materials/elastic.py`. The parameters `E` and `Nu` represent the Young's modulus and Poisson's ratio, respectively. Step DefinitionsDeformation steps define the components of deformation and/or stress to be seen by the material model. Deformation steps are defined by the `MaterialPointSimulator.run_step` method: mps.run_step(descriptors, components)where the argument `descriptors` is a string or list of strings describing each `component` of deformation. Each `descriptor` in `descriptors` must be one of:- `E`: representing strain- `DE`: representing an increment in strain- `S`: representing stress- `DS`: representing an increment in stress- `F`: representing the deformation gradient- `U`: representing displacement`components` is an array containing the components of deformation. The `descriptors` argument instructs the `MaterialPointSimulator` the intent of each `component`. The i$^{\rm th}$ descriptor corresponds to the i$^{\rm th}$ component. For example, ```pythondescriptors = ['E', 'E', 'E', 'S', 'S', 'S']``` declares that the first three components of `components` are to be interpreted as strain (`'E'`) and the last three as stress (`'S'`). Accordingly, `len(components)` must equal `len(descriptors)`. Generally speaking, descriptors must be an iterable object with length equal to the length of `components`. Since string objects are iterable in python, the following representation of `descriptors` is equivalent:```pythondescriptors = 'EEESSS'```The `run_step` method also accepts the following optional arguments:- `increment`: The length of the step in time units, default is 1.- `frames` The number of discrete increments in the step, default is 1- `scale`: Scaling factor to be applied to components. If `scale` is a scalar, it is applied to all `components` equally. If `scale` is a list, `scale[i]` is applied to `components[i]` (and must, therefore, have the same length as `components`)- `kappa`: The Seth-Hill parameter of generalized strain. Default is 0.- `temperature`: The temperature at the end of the step. Default is 0. A note on tensor component orderingComponent ordering for `components` is:1. Symmetric tensors: XX, YY, ZZ, XY, YZ, XZ2. Unsymmetric tensors: XX, XY, XZ YX, YY, YZ ZX, ZY, ZZ3. Vectors: X, Y, Z Example Steps Run a step of uniaxial strain by prescribing all 6 components of the strain tensor.
###Code
ea = .1
mps.run_step('EEEEEE', (ea, 0, 0, 0, 0, 0))
###Output
_____no_output_____
###Markdown
To reverse the step of uniaxial strain defined in the previous cell to a state of zero strain, simply define another step in which all components of strain are zero:
###Code
mps.run_step('EEE', (0, 0, 0))
###Output
_____no_output_____
###Markdown
If `3 <= len(components) < 6`, the missing components are assumed to be zero (if `len(components) == 1`, it is assumed to be volumetric strain). From elementary linear elasticity, the axial and lateral stresses associated with the step of uniaxial strain are
###Code
G = E / 2. / (1. + Nu)
K = E / 3. / (1. - 2. * Nu)
sa = (K + 4 * G / 3) * ea
sl = (K - 2 * G / 3) * ea
###Output
_____no_output_____
###Markdown
where `K` and `G` are the bulk and shear modulus, respectively. Using a stress defined step, an equivalent deformation path is
###Code
mps.run_step('SSS', (sa, sl, sl), frames=50)
mps.run_step('SSS', (0, 0, 0), frames=50)
###Output
_____no_output_____
###Markdown
The optional `frames` keyword was passed to `run_step` which instructs the `MaterialPointSimulator` object to perform the step in `frames` increments (50 in this case). For stress controlled steps, it is a good idea to increase the number of `frames` since the solution procedure involves a nonlinear Newton solve. Mixed-mode deformations of stress and strain can also be defined. The previous deformation path could have been defined by
###Code
mps.run_step('ESS', (ea, sl, sl), frames=50)
mps.run_step('ESS', (0, 0, 0), frames=50)
###Output
_____no_output_____
###Markdown
The deformation path can be defined equivalently through the specification of stress and strain rate steps:
###Code
mps.run_step(('DE', 'DE', 'DE'), (ea, 0, 0), frames=50)
mps.run_step(('DE', 'DE', 'DE'), (ea, 0, 0), frames=50, scale=-1)
mps.run_step(('DS', 'DS', 'DS'), (sa, sl, sl), frames=50)
mps.run_step(('DS', 'DS', 'DS'), (sa, sl, sl), frames=50, scale=-1)
###Output
_____no_output_____
###Markdown
The keyword `scale` is a scale factor applied to each of the components of `components`. Components of the deformation gradient and displacement can also be prescribed with the `F` and `U` descriptors, respectively. A deformation gradient step requires the nine components of the deformation gradient, arranged in row-major fashion. A displacement step requires the three components of the displacement.
###Code
fa = exp(ea)
mps.run_step('FFFFFFFFF', (fa,0,0,0,1,0,0,0,1))
mps.run_step('FFFFFFFFF', (1,0,0,0,1,0,0,0,1))
###Output
_____no_output_____
###Markdown
Running the ModelSteps are run as they are added. Model OutputsModel outputs computed by the `MaterialPointSimulator` are stored in a `pandas.DataFrame`:
###Code
mps.df
###Output
_____no_output_____
###Markdown
The output can also be written to a file with the `MaterialPointSimulator.dump` method:
###Code
mps.dump()
###Output
_____no_output_____
###Markdown
The `MaterialPointSimulator.dump` method takes an optional filename. If not given, the `jobid` will be used as the base filename. The file extension must be either `.npz`, for output written to a compressed numpy file, or `.exo`, for output written in the `ExodusII` format. Model outputs can be retrieved from the `MaterialPointSimulator` via the `get` method. For example, the components of stress throughout the history of the simulation are:
###Code
s = mps.get('S')
###Output
_____no_output_____
###Markdown
Individual components can also be accessed:
###Code
sxx = mps.get('S.XX')
assert (amax(sxx) - sa) / amax(sxx) < 1e-8
###Output
_____no_output_____
###Markdown
Equivalently, the `MaterialPointSimulator.get` method can retrieve components of field outputs from the output database. Viewing Model OutputsThe simplest method of viewing model outputs is to use the `pandas.DataFrame.plot` method, accessed through `MaterialPointSimulator.df`:
###Code
mps.df.plot('Time', 'E.XX')
###Output
_____no_output_____
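###Markdown
A common follow-up is a stress-strain cross-plot built from two retrieved outputs. The sketch below assumes the strain component is retrieved with the same `get` syntax shown above for stress, and it relies on the `pylab` namespace loaded at the top of the notebook:
###Code
# Sketch: cross-plot axial stress against axial strain
exx = mps.get('E.XX')
sxx = mps.get('S.XX')
plot(exx, sxx)
xlabel('E.XX')
ylabel('S.XX')
###Output
_____no_output_____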
###Markdown
Importing scgenFirst we need to import scgen (we use here a relative path)
###Code
import sys
import os
sys.path.append(os.path.abspath(os.path.join('..')))
from scgen.main.DefaultGeneratorFactory import DefaultGeneratorFactory
###Output
_____no_output_____
###Markdown
Specifying a configuration for generationWe create a JSON-string that specifies the parameters of the random generation of the supply chain.
###Code
configuration = \
"""{
"elements": [
{
"type": "suppliers",
"count": 4
},
{
"type": "plants",
"count": 3
}
],
"modules": [
{
"type": "arc",
"fromElements": [ "suppliers" ],
"toElements": [ "plants" ],
"distributions": "default"
},
{
"type": "demand",
"forElements": [ "plants" ],
"distributions": [
{
"dependingOnElements": [ "plants" ],
"type": "uniform",
"min": 10.0,
"max": 10.0
}
]
},
{
"type": "allocation",
"forElements": [ "suppliers", "plants" ],
"distributions": [
{
"dependingOnElements": [ "suppliers" ],
"type": "uniform",
"min": 0.0,
"max": 1.0
}
]
}
]
}"""
###Output
_____no_output_____
###Markdown
Generating a supply chainUsing the above configuration, we now generate the supply chain.
###Code
import json
gen = DefaultGeneratorFactory.getDefaultGenerator()
gen.generate(json.loads(configuration))
output = gen.output().getJson()
print(output)
###Output
{"elements": {"suppliers": [{"suppliers": "supplier_1"}, {"suppliers": "supplier_2"}, {"suppliers": "supplier_3"}, {"suppliers": "supplier_4"}], "plants": [{"plants": "plant_1"}, {"plants": "plant_2"}, {"plants": "plant_3"}]}, "modules": {"arc": [{"suppliers": "supplier_1", "plants": "plant_1", "existing": 1}, {"suppliers": "supplier_1", "plants": "plant_2", "existing": 1}, {"suppliers": "supplier_1", "plants": "plant_3", "existing": 1}, {"suppliers": "supplier_2", "plants": "plant_1", "existing": 1}, {"suppliers": "supplier_2", "plants": "plant_2", "existing": 1}, {"suppliers": "supplier_2", "plants": "plant_3", "existing": 1}, {"suppliers": "supplier_3", "plants": "plant_1", "existing": 1}, {"suppliers": "supplier_3", "plants": "plant_2", "existing": 1}, {"suppliers": "supplier_3", "plants": "plant_3", "existing": 1}, {"suppliers": "supplier_4", "plants": "plant_1", "existing": 1}, {"suppliers": "supplier_4", "plants": "plant_2", "existing": 1}, {"suppliers": "supplier_4", "plants": "plant_3", "existing": 1}], "demand": [{"plants": "plant_1", "demand": 10.0}, {"plants": "plant_2", "demand": 10.0}, {"plants": "plant_3", "demand": 10.0}], "allocation": [{"suppliers": "supplier_1", "plants": "plant_1", "allocation": 2.4203361376}, {"suppliers": "supplier_2", "plants": "plant_1", "allocation": 2.5202311274}, {"suppliers": "supplier_3", "plants": "plant_1", "allocation": 2.0460211701}, {"suppliers": "supplier_4", "plants": "plant_1", "allocation": 3.0134115649}, {"suppliers": "supplier_1", "plants": "plant_2", "allocation": 2.4203361376}, {"suppliers": "supplier_2", "plants": "plant_2", "allocation": 2.5202311274}, {"suppliers": "supplier_3", "plants": "plant_2", "allocation": 2.0460211701}, {"suppliers": "supplier_4", "plants": "plant_2", "allocation": 3.0134115649}, {"suppliers": "supplier_1", "plants": "plant_3", "allocation": 2.4203361376}, {"suppliers": "supplier_2", "plants": "plant_3", "allocation": 2.5202311274}, {"suppliers": "supplier_3", "plants": "plant_3", "allocation": 2.0460211701}, {"suppliers": "supplier_4", "plants": "plant_3", "allocation": 3.0134115649}]}}
###Markdown
Getting Started Setting up the environmentIn order to use `NetTopologySuite` (NTS) you need to install the nuget package.
###Code
#r "nuget:NetTopologySuite, 2.3.0"
###Output
_____no_output_____
###Markdown
To make using NTS a pleasant experience you should set up the environment to according to your needs first.This is done by setting `NtsGeometryServices.Instance` to an instance configured to your needs.
###Code
NetTopologySuite.NtsGeometryServices.Instance = new NetTopologySuite.NtsGeometryServices(
// default CoordinateSequenceFactory
NetTopologySuite.Geometries.Implementation.CoordinateArraySequenceFactory.Instance,
// default precision model
new NetTopologySuite.Geometries.PrecisionModel(1000d),
// default SRID
4326,
/********************************************************************
* Note: the following arguments are only valid for NTS >= v2.2
********************************************************************/
// Geometry overlay operation function set to use (Legacy or NG)
NetTopologySuite.Geometries.GeometryOverlay.NG,
// Coordinate equality comparer to use (CoordinateEqualityComparer or PerOrdinateEqualityComparer)
new NetTopologySuite.Geometries.CoordinateEqualityComparer());
###Output
_____no_output_____
###Markdown
This is the most flexible constructor you can use, there are convenient constructors with less options.**Note:** If you skip this step, a pre-configured instance will be used with the following values:Property | Value--- | ---`DefaultCoordinateSequenceFactory` | `NetTopologySuite.Geometries.Implementation.CoordinateArraySequenceFactory.Instance``DefaultPrecisionModel` | `NetTopologySuite.Geometries.PrecisionModels.Floating``DefaultSRID` | `-1``GeometryOverlay` | `NetTopologySuite.Geometries.GeometryOverlay.Legacy``CoordinateEqualityComparer` | `NetTopologySuite.Geometries.CoordinateEqualityComparer` Creating geometries`NetTopologySuite` provides 7 `Geometry` classes. Geometries are made up of `Coordinate`s which are combined in `CoordinateSequence`s.* `Point` A geometry made up of a single coordinate.* `LineString` A geometry made up of a sequence of successive points. A `LinearRing` is a special case of a closed `LineString`* `Polygon` A geometry made up of a shell (aka exterior ring) and possibly holes (aka InteriorRing). The shell and each hole is a `LinearRing` geometry.* `MultiPoint` A geometry made up of multiple points* `MultiLineString` A geometry made up of multiple linestrings* `MultiPolygon` A geometry made up of multiple polygons* `GeometryCollection` A geometry made up of multiple single-geometry items. While each Geometry class has a public constructor the usage is not encouraged. You should use a `GeometryFactory` instead.You can optain a geometry factory by requesting one from `NetTopologySuite.NtsGeometryServices.Instance`:
###Code
// Get a geometry factory from the configured NtsGeometryServices.
// Differing from NtsGeometryServices.DefaultSRID we want one with SRID=4326
var gf = NetTopologySuite.NtsGeometryServices.Instance.CreateGeometryFactory(4326);
###Output
_____no_output_____
###Markdown
You can now use this factory to create e.g. puntal geometries:
###Code
// Create a point at Aurich (lat=53.4837, long=7.5404)
var pntAUR = gf.CreatePoint(new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837));
// Create a point at Emden (lat=53.3646, long=7.1559)
var pntEMD = gf.CreatePoint(new NetTopologySuite.Geometries.Coordinate(7.1559, 53.3646));
// Create a point at Leer (lat=53.2476, long=7.4550)
var pntLER = gf.CreatePoint(new NetTopologySuite.Geometries.Coordinate(7.4550, 53.2476));
System.Console.WriteLine(pntLER.Distance(pntAUR));
###Output
_____no_output_____
###Markdown
To create lineal geometries you need to provide a series of `Coordinate`s.
###Code
// Create a linestring from Aurich to Emden
var lnsAURToEMD = gf.CreateLineString(new [] {
new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837),
new NetTopologySuite.Geometries.Coordinate(7.1559, 53.3646)
});
var lnsAURtoLER = gf.CreateLineString(new[] {
new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837),
new NetTopologySuite.Geometries.Coordinate(7.4550, 53.2476)
});
###Output
_____no_output_____
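###Markdown
As a quick optional check, lineal geometries expose a planar `Length` property and the usual spatial predicates:
###Code
// Both linestrings start at Aurich, so they intersect
System.Console.WriteLine(lnsAURToEMD.Length);
System.Console.WriteLine(lnsAURToEMD.Intersects(lnsAURtoLER));
###Output
_____no_output_____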
###Markdown
Polygons can be created by providing a set of shell-ring `Coordinate`s or by providing the ring itself.
###Code
// Create a polygon from Aurich over Emden, Leer to Aurich
var ply1 = gf.CreatePolygon(new[] {
new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837),
new NetTopologySuite.Geometries.Coordinate(7.1559, 53.3646),
new NetTopologySuite.Geometries.Coordinate(7.4550, 53.2476),
new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837),
});
// Alternatively you can build this polygon by building a LinearRing first.
// A LinearRing requires 4 coordinates and 1st and last coordinate must be equal!
var lnr = gf.CreateLinearRing(new[] {
new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837),
new NetTopologySuite.Geometries.Coordinate(7.1559, 53.3646),
new NetTopologySuite.Geometries.Coordinate(7.4550, 53.2476),
new NetTopologySuite.Geometries.Coordinate(7.5404, 53.4837),
});
var ply2 = gf.CreatePolygon(lnr);
###Output
_____no_output_____
###Markdown
There are `MULTI` types for the introduced single instance types.
###Code
// Multi-geometries are built by passing arrays of geometries
var mpnt = gf.CreateMultiPoint(new[] {pntAUR, pntLER, pntEMD});
var mlns = gf.CreateMultiLineString(new[] { lnsAURToEMD, lnsAURtoLER });
var mpoly = gf.CreateMultiPolygon(new[] { ply1 });
###Output
_____no_output_____
###Markdown
Lastly, there is a collection of arbitrary geometry types.
###Code
// A geometry collection
var gc = gf.CreateGeometryCollection(
new NetTopologySuite.Geometries.Geometry[]
{ pntAUR, lnsAURToEMD, pntEMD, ply2, pntLER, lnsAURtoLER });
###Output
_____no_output_____
###Markdown
Instead of using the `Coordinate` class you can also use one of its derived classes:* `CoordinateZ` for 3D coordinates.* `CoordinateM` for 2D coordinates with an additional measure value.* `CoordinateZM` for 3D coordinates with an additional measure value.Or you can create a `CoordinateSequence` and build the single-instance geometries using that:
###Code
// Create coordinate sequence
var cs = gf.CoordinateSequenceFactory.Create(1, NetTopologySuite.Geometries.Ordinates.XYM);
cs.SetX(0, 7.5404);
cs.SetY(0, 53.4837);
cs.SetM(0, 5432);
var tpAUR2 = gf.CreatePoint(cs);
###Output
_____no_output_____
###Markdown
While all `Geometry` classes are marked with `SerializableAttribute`, using this form of serialization is not the preferred way of dealing with persistence.Out of the box the [`NetTopologySuite`](https://www.nuget.org/packages/NetTopologySuite/) package provides reader and writer classes for the `Well-known-text` (WKT) and `Well-known-binary` (WKB) format. Well-known-text
###Code
const string wkt = "POINT M(7.5404 53.4837 5432)";
var rdr = new NetTopologySuite.IO.WKTReader();
var ptAUR = rdr.Read(wkt);
var wrt = new NetTopologySuite.IO.WKTWriter();
wrt.OutputOrdinates = NetTopologySuite.Geometries.Ordinates.AllOrdinates;
string wktOut = wrt.Write(ptAUR);
System.Console.WriteLine(wktOut);
// or plainly for strictly 2D geometries!
wktOut = ptAUR.AsText();
wktOut = ptAUR.ToString();
System.Console.WriteLine(wktOut);
###Output
_____no_output_____
###Markdown
Well-known-binary
###Code
byte[] wkb = NetTopologySuite.IO.WKBReader.HexToBytes(
"01B90B00E0E8640000295C8FC2F5281E40CBA145B6F3BD4A40000000000000F8FF000000000038B540");
var rdr = new NetTopologySuite.IO.WKBReader {
HandleOrdinates = NetTopologySuite.Geometries.Ordinates.AllOrdinates,
HandleSRID = true };
var ptAUR = rdr.Read(wkb);
var wrt = new NetTopologySuite.IO.WKBWriter(NetTopologySuite.IO.ByteOrder.LittleEndian,
/* emit SRID */ true, /* emit Z */ true, /* emit M */ true);
byte[] wkbOut = wrt.Write(ptAUR);
System.Console.WriteLine(ptAUR);
System.Console.WriteLine(NetTopologySuite.IO.WKBWriter.ToHex(wkbOut));
###Output
_____no_output_____
###Markdown
Other projects/packagesThere are seperate packages for reading and writing NetTopologySuite's `Geometry` classes:* [`NetTopologySuite.IO.GeoJSON`](https://github.com/NetTopologySuite/NetTopologySuite.IO.GeoJSON) This project actually offers two packages to read and write geometries each using a different support library for serializing JSON. * [`NetTopologySuite.IO.GeoJSON`](https://www.nuget.org/packages/NetTopologySuite.IO.GeoJSON/) using `Newtonsoft.Json` * [`NetTopologySuite.IO.GeoJSON4STJ`](https://www.nuget.org/packages/NetTopologySuite.IO.GeoJSON4STJ/) using `System.Text.Json`* [`NetTopologySuite.IO.SqlServerBytes`](https://github.com/NetTopologySuite/NetTopologySuite.IO.SqlServerBytes)* [`NetTopologySuite.IO.ShapeFile`](https://github.com/NetTopologySuite/NetTopologySuite.IO.ShapeFile)* [`NetTopologySuite.IO.PostGis`](https://github.com/NetTopologySuite/NetTopologySuite.IO.PostGis)* [`NetTopologySuite.IO.TinyWKB`](https://github.com/NetTopologySuite/NetTopologySuite.IO.TinyWKB) Spatial predicatesNetTopologySuite's `Geometry` classes support the following predicates as defined in the _OpenGIS® Implementation Standard for Geographic information - Simple feature access - Part 1: Common architecture_. EqualsEvaluates to `true` if a geometry is _spatially_ equal to another geometry.```C// As defined in SFA-Commonbool equalSfs = geom.EqualsTopologically(otherGeom)// As required for .Net. 'EqualsExact' is called by overload of object.Equals(object obj)bool equalNet = geom.EqualsExact(otherGeom/*, tolerance*/)``` DisjointEvaluates to `true` if a geometry is _spatially_ disjoint to another geometry. This equivalent to negating the return value of an intersection test. ```Cbool disjoint = geom.Disjoint(otherGeom)``` IntersectsEvaluates to `true` if a geometry _spatially_ intersects another geometry.```Cbool intersects = geom.Intersects(otherGeom)``` TouchesEvaluates to `true` if a geometry _spatially_ touches another geometry.```Cbool touches = geom.Touches(otherGeom);``` CrossesEvaluates to `true` if a geometry _spatially_ crosses another geometry.```Cbool crosses = geom.Crosses(otherGeom);``` WithinEvaluates to `true` if a geometry is _spatially_ within another geometry.```Cbool within = geom.Within(otherGeom);``` ContainsEvaluates to `true` if a geometry _spatially_ contains another geometry.```Cbool contains = geom.Contains(otherGeom);``` OverlapsEvaluates to `true` if a geometry _spatially_ overlaps another geometry.```Cbool overlaps = geom.Overlaps(otherGeom);``` RelateEvaluates the relationship between a geometry and another geometry(see [DE-9IM](https://en.wikipedia.org/wiki/DE-9IM)). An overload of this function tests if anassumed intersection matrix correctly describes the relationship.```Cvar im = geom.Relate(otherGeom);bool relate = geom.Relate(otherGeom, im.ToString());```It is worth noting that for 1:M predicate checks there are utility classes in `NetTopologySuite.Geometries.Prepared` namespace.```Cvar prepGeom = NetTopologySuite.Geometries.Prepared.PreparedGeometryFactory.Prepare(geom);foreach (var geomItem in geometries){ // instead of 'Intersects' there are also // the other normal predicates except 'Relate', // plus 'ContainsProperly' if (prepGeom.Intersects(geomItem)) { // do sth. with geomItem }}``` Spatial operationsThe following examples assume we have a `WKTReader` like
###Code
var rdr = new NetTopologySuite.IO.WKTReader();
###Output
_____no_output_____
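###Markdown
Before moving on to the overlay operations, here is a small self-contained sketch of the predicates in action (the geometry values below are made up for illustration):
###Code
var a = rdr.Read("POLYGON ((0 0, 0 10, 10 10, 10 0, 0 0))");
var b = rdr.Read("POLYGON ((5 5, 5 15, 15 15, 15 5, 5 5))");
var c = rdr.Read("POINT (20 20)");
System.Console.WriteLine(a.Intersects(b)); // True - the squares overlap
System.Console.WriteLine(a.Contains(c));   // False - the point lies outside a
System.Console.WriteLine(a.Disjoint(c));   // True - no points in common
System.Console.WriteLine(a.Relate(b));     // DE-9IM intersection matrix, e.g. 212101212
###Output
_____no_output_____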
###Markdown
Intersection
###Code
const string wktPoly1 = "POLYGON ((10 10, 10 30, 30 30, 30 10, 10 10))";
const string wktPoly2 = "POLYGON ((20 20, 20 40, 40 40, 40 20, 20 20))";
var poly1 = rdr.Read(wktPoly1);
var poly2 = rdr.Read(wktPoly2);
// Should be POLYGON ((20 30, 30 30, 30 20, 20 20, 20 30))
var polyInt = poly1.Intersection(poly2);
System.Console.WriteLine(polyInt.AsText())
###Output
_____no_output_____
###Markdown
Difference
###Code
// Should be POLYGON ((10 10, 10 30, 20 30, 20 20, 30 20, 30 10, 10 10))
var polyDiff = poly1.Difference(poly2);
System.Console.WriteLine(polyDiff.AsText());
const string wktLine1 = "LINESTRING (5 15, 15 25, 35 20)";
const string wktLine2 = "LINESTRING (15 25, 35 20, 40 21)";
var ln1 = rdr.Read(wktLine1);
var ln2 = rdr.Read(wktLine2);
// Should be LINESTRING(5 15, 15 25)
var lnDiff = ln1.Difference(ln2);
System.Console.WriteLine(lnDiff.AsText())
###Output
_____no_output_____
###Markdown
SymmetricDifference
###Code
// Should be MULTILINESTRING((5 15, 15 25), (35 20, 40 21))
var lnSymDiff = ln1.SymmetricDifference(ln2);
System.Console.WriteLine(lnSymDiff.AsText())
###Output
_____no_output_____
###Markdown
Union
###Code
// Should be POLYGON ((10 10, 10 30, 20 30, 20 40, 40 40, 40 20, 30 20, 30 10, 10 10))
var polyUnion = poly1.Union(poly2);
System.Console.WriteLine(polyUnion);
// Should be GEOMETRYCOLLECTION (
// LINESTRING (5 15, 10 20),
// POLYGON ((10 10, 10 20, 10 30, 20 30, 20 40, 40 40, 40 21, 40 20, 35 20, 30 20, 30 10, 10 10)))
var allUnion = poly1.Factory.CreateGeometryCollection(
new NetTopologySuite.Geometries.Geometry[]
{
poly1, poly2, ln1, ln2
}).Union();
System.Console.WriteLine(allUnion);
###Output
_____no_output_____
###Markdown
Buffer
###Code
const string wktPoint = "POINT (15 15)";
var pt = rdr.Read(wktPoint);
// Create a buffer around a point with distance of 2d
var ptBuffer = pt.Buffer(2);
System.Console.WriteLine(ptBuffer.AsText())
###Output
_____no_output_____
###Markdown
ConvexHull
###Code
// Should be POLYGON ((10 10, 30 10, 40 20, 40 40, 20 40, 10 30, 10 10))
var ch = polyUnion.ConvexHull();
System.Console.WriteLine(ch.AsText())
###Output
_____no_output_____
###Markdown
PointOnSurface
###Code
// Should be POINT (25 25)
var pos = polyUnion.PointOnSurface;
System.Console.WriteLine(pos.AsText())
###Output
_____no_output_____
###Markdown
In this notebook, we will explore learn about the WhyLogs Python library and the resulting profile summaries. Getting Started with WhyLogs Profile SummariesWe will first read in raw data into Pandas from file and explore that data briefly. To run WhyLogs, we will then import the WhyLogs library, initialize a logging session with WhyLogs, and create a profile that data -- resulting in a WhyLogs profile summary. Finally, we'll explore some of the features of the profile summary content.First, we will import a few standard data science Python libraries.
###Code
import datetime
import os.path
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
WhyLogs allows you to characterize and store key characteristics of a growing dataset efficiently. In machine learning, datasets often consist of both input features and outputs of the model. In deployed systems, you often have a relatively static training dataset as well as a growing dataset from model input and output at inference time. Downloading and exploring the raw Lending Club dataIn our case, we will download and explore a sample from the Lending Club dataset before logging a WhyLogs profile summary. Lending Club is a peer-to-peer lending and alternative investing website on which members may apply for personal loans and invest in personal loans to other Lending Club members. The company published a dataset with information starting in 2013(?). This particular dataset contains only the accepted loans. Our example input data is located at `../example-input/lending_club_demo.csv`. You may use the Jupyter command `!` in front of cell contents to execute a Bash command like `cd` to navigate if necessary.
###Code
data_file = "../example-input/lending_club_demo.csv"
###Output
_____no_output_____
###Markdown
Let's read in that data file into a Pandas dataframe and look at the entries for *January 2017*.Each row refers to a particular loan instance while each column refers to a variable in our dataset.
###Code
full_data = pd.read_csv(os.path.join(data_file))
full_data['issue_d'].describe()
data = full_data[full_data['issue_d'] == 'Jan-2017']
data
###Output
/Users/bernease/miniconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3063: DtypeWarning: Columns (18,117) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Noteworthy Lending Club dataset variables**`loan_status` (categorical, string)**:> current status of the Lending Club loan**`annual_inc` (numeric)**:> the self-reported annual income provided by the borrower during registration**`dti` (numeric)**:> ratio calculated using the borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income**`issue_d` (timestamp, string)**:> the month (and year) which the loan was funded -- useful for backfilling data Running WhyLogs for logging a single datasetLet's now explore import a function from Why Labs that allows us to create a logging session.This session can be connected with multiple writers that output the results of our profiling locally in JSON, a flat CSV, or binary protobuf format as well as writers to an AWS S3 bucket in the cloud. Further writing functionality will be added as well.Let's create a default session below.
###Code
from whylabs.logs import get_or_create_session
session = get_or_create_session()
logger = session.logger()
session.log_dataframe(data.head(100), 'demo')
###Output
_____no_output_____
###Markdown
Now that we've logged our dataset, we can see the output of the WhyLogs profiling process in the created directory. Inside of our original directory, the WhyLogs logger creates an `output` directory containing folders for our named dataset `demo` and the associated timestamp. Inside, we see what has been logged.
###Code
print("Current working directory:", os.getcwd())
print("Directory contents:\n", os.listdir())
###Output
Current working directory: /Users/bernease/repos/cli-demo-1/example-notebooks/output/demo/1595958374606
Directory contents:
['protobuf.bin', 'summary_summary.csv', 'summary_strings.json', 'summary_histogram.json', 'whylogs.json']
###Markdown
Inside of that directory, we see a number of files:* `whylogs.json`* `summary_summary.csv`* `summary_histogram.json`* `summary_strings.json`* `protobuf.bin`We could read these files into Pandas using the `pd.read_csv` and `pd.read_json` functions to explore these profile summaries. When done with your logging session, you may close it using `session.close()`. Typically, this task would be saved until the end but you may do so now.
###Code
session.close()
###Output
_____no_output_____
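###Markdown
If you want to work with the written summaries directly, they can be read back in with standard tools. A minimal sketch (assuming the current working directory is the logger output folder listed above):
###Code
# Sketch: load the flat summary and the histogram file written by the logger
import json
import pandas as pd
flat_summary_from_disk = pd.read_csv("summary_summary.csv")
with open("summary_histogram.json") as f:
    histograms_from_disk = json.load(f)
flat_summary_from_disk.head()
###Output
_____no_output_____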
###Markdown
WhyLogs also provides a static `dataframe_profile` function that returns a DatasetProfile object when passed in a Pandas dataframe with our raw data. This particular function does not require an active session to be running.Because the remainder of the notebook uses this functionality instead of the typical writing logs to disk or S3, we can close the session now.
###Code
from whylabs.logs.core.datasetprofile import dataframe_profile
profile = dataframe_profile(data, 'demo', timestamp=datetime.datetime.strptime('Jan-2017', '%b-%Y'))
profile
###Output
_____no_output_____
###Markdown
This DatasetProfile object, stored in the `profile` variable, can now be referenced from Python.This object contains helpful information about the profile, such as the session ID, the dates associated with both the data and session, and user-specified metadata and tags. First, let's transform the dataset profile into the flat summary form. Unlike the binary `protobuf.bin` file and the hierarchical `whylogs.json` file that was written using the logger, the summary format makes it much easier to analyze and run data science processes on the data. This structure is much more flat, a table format or a single depth dictionary format organized by variable.These less hierarchical formats were also created with the `log_dataframe` functionality and can be found in the `summary_summary.csv`, `summary_histogram.json` and `summary_strings.json` files.
###Code
summaries = profile.flat_summary()
###Output
_____no_output_____
###Markdown
Let's first look at the overall summary for the profiled dataset.
###Code
summary = summaries['summary']
summary
###Output
_____no_output_____
###Markdown
We can see that this summary object is much smaller at **151 rows x 32 columns** than the original dataset at **1000 rows x 151 columns**. Smaller storage sizes are important in reducing costs and making it easier for your data scientists to complete monitoring and post-analysis on large amounts of data. Each row of our flat profile summary lists, under the `column` column, the name of a variable found in the dataset.We can also see a number of useful metrics as columns in our summary: descriptive statistics, type information, unique estimates and bounds, as well as specially formulated metrics like inferred_dtype and dtype_fraction. Let's explore the output of WhyLogs for a few of the variables we mentioned earlier. For example, let's look at the `funded_amnt` variable.
###Code
summary[summary['column']=='funded_amnt'].T
###Output
_____no_output_____
###Markdown
You may notice that the count for this variable was recorded at **309** with a minimum loan amount of **\$1,000.00 USD** and a maximum loan amount of **\$40,000.00 USD**.For numerical variables like `funded_amnt`, we can view further information in the histograms dictionary from the profile summaries object. The variable's histogram object contains bin edges along with counts.
###Code
histograms = summaries['hist']
print(histograms['funded_amnt'])
###Output
{'bin_edges': [1000.0, 2300.0001333333335, 3600.000266666667, 4900.000400000001, 6200.000533333334, 7500.000666666667, 8800.000800000002, 10100.000933333335, 11400.001066666668, 12700.0012, 14000.001333333334, 15300.001466666668, 16600.001600000003, 17900.001733333334, 19200.00186666667, 20500.002, 21800.002133333335, 23100.00226666667, 24400.0024, 25700.002533333336, 27000.002666666667, 28300.002800000002, 29600.002933333337, 30900.003066666668, 32200.003200000003, 33500.00333333334, 34800.00346666667, 36100.003600000004, 37400.00373333334, 38700.00386666667, 40000.004], 'counts': [7, 12, 11, 34, 14, 19, 32, 8, 24, 9, 22, 14, 9, 9, 24, 7, 3, 5, 8, 2, 5, 3, 5, 3, 2, 0, 15, 0, 0, 3]}
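###Markdown
For a quick visual check, the stored bin edges and counts can be plotted directly (a sketch using matplotlib; any plotting library would do):
###Code
# Sketch: bar chart of the stored histogram for funded_amnt
import matplotlib.pyplot as plt
h = histograms['funded_amnt']
edges, counts = h['bin_edges'], h['counts']
plt.bar(edges[:-1], counts, width=np.diff(edges), align='edge')
plt.xlabel('funded_amnt')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____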
###Markdown
For another variable, `loan_status`, we will see interesting information in different metrics. This is because loan status is a categorical field that takes strings as inputs.Let's look at a few relevant metrics for this and other string variables.
###Code
summary[summary['column']=='loan_status'][['type_string_count', 'type_null_count', 'nunique_str', 'nunique_str_lower', 'nunique_str_upper']]
###Output
_____no_output_____
###Markdown
Notice that there are **309** elements of string type. Also, the unique string fields show **6** unique strings. The lower and upper bounds for the estimate are also **6**, meaning that this is an exact number. You will see many instances of this -- DataSketches in WhyLogs finds exact estimates for numbers as high as 400 unique values.Let's now explore the frequent strings object from our profile summaries.
###Code
frequent_strings = summaries['frequent_strings']
print(frequent_strings['loan_status'])
###Output
{'Current': 239, 'Fully Paid': 54, 'Charged Off': 7, 'Late (31-120 days)': 5, 'In Grace Period': 3, 'Late (16-30 days)': 1}
###Markdown
Visualizing multiple datasets across time with WhyLogs To use the WhyLogs visualization tools, let us import the `ProfileVisualizer` object and use the Altair visualization framework for now.
###Code
from whylabs.logs.viz import ProfileVisualizer
viz = ProfileVisualizer()
viz = viz.enable_framework('altair')
###Output
_____no_output_____
###Markdown
Now that we've explored data for a single month, let's calculate profile summaries for a series of months. Normally, we'd expect WhyLogs to be operating on future data, so these new datasets originate from data seen at inference time.But in special cases like this demo or diagnosing data collected prior to WhyLogs integration, it may be helpful to backfill with past data. Here we'll manually create a list of profile summaries, but this will soon be made even more simple in WhyLogs.
###Code
import datetime
# Create a list of data profiles
remaining_dates = ['Feb-2017', 'Mar-2017', 'Apr-2017', 'May-2017', 'Jun-2017']
profiles = [profile] # list with original profile
for date in remaining_dates:
timestamp = datetime.datetime.strptime(date, '%b-%Y')
subset_data = full_data[full_data['issue_d']==date]
subset_profile = dataframe_profile(subset_data, timestamp=timestamp)
profiles.append(subset_profile)
profiles
###Output
_____no_output_____
###Markdown
Let's pass this list of profiles into the visualizer.
###Code
viz.set_profiles(profiles)
###Output
_____no_output_____
###Markdown
We can now quickly look at temporal series visualizations for our profiles.
###Code
viz.plot_schema_series("loan_status")
viz.plot_null_series("dti")
viz.plot_uniqueness_series("dti")
viz.plot_distribution_series('loan_status', type='discrete')
###Output
_____no_output_____
###Markdown
[](http://colab.research.google.com/github/asteroid-team/asteroid-filterbanks/blob/master/notebooks/GettingStarted.ipynb) Filterbank APIAll supported filterbanks and related functions and classes can be found in [`asteroid-filterbanks`](https://github.com/asteroid-team/asteroid-filterbanks/tree/master/asteroid_filterbanks). The main classes are `Filterbank`, `Encoder` and `Decoder`.- `Filterbank` is the class holding the properties of the filterbank you want to use.- `Encoder` and `Decoder` are wrappers around filterbanks that enhance them with useful methods and attributes.The most common is the filterbank with fully learnable filters : `FreeFB`. Wrapping it by an `Encoder` makes it similar to a `nn.Conv1d`, wrapping it by a `Decoder` makes it similar to a `nn.ConvTranspose1d`.
###Code
# First install asteroid-filterbanks and its dependencies
!pip install git+https://github.com/asteroid-team/asteroid-filterbanks.git@master --quiet
%matplotlib inline
###Output
_____no_output_____
###Markdown
Simple example
###Code
import torch
from asteroid_filterbanks.enc_dec import Filterbank, Encoder, Decoder
from asteroid_filterbanks import FreeFB
import matplotlib.pyplot as plt
# First, instantiate a filterbank
fb = FreeFB(n_filters=256, kernel_size=128, stride=64)
# Make an encoder out of it, forward some waveform through it.
encoder = Encoder(fb)
# Same for decoder (the filterbank doesn't need to be the same)
decoder_fb = FreeFB(n_filters=256, kernel_size=128, stride=64)
decoder = Decoder(decoder_fb)
waveform = torch.randn(1, 1, 32000)  # (batch, channel, wav_length)
# This would be the output of an adaptive/learnable front-end like in TasNet.
spec_like = encoder(waveform)
# Do whatever you want with it
modif_spec_like = (spec_like.pow(2) + 1).log()
# Go back in the time domain with the decoder.
out_waveform = decoder(modif_spec_like)
###Output
_____no_output_____
###Markdown
Short-time Fourier Transform (STFT)`asteroid` provides a filterbank for STFT (`STFTFB`), which yields the STFT when used as an encoder and the iSTFT as a decoder.
###Code
from asteroid_filterbanks import STFTFB
from asteroid_filterbanks.transforms import take_mag
# By default, the filters are weighted by a square root hanning window.
dft_filters = STFTFB(n_filters=512, kernel_size=256, stride=128)
stft = Encoder(dft_filters)
idft_filters = STFTFB(n_filters=512, kernel_size=256, stride=128)
istft = Decoder(idft_filters)
spec = stft(waveform)
out_waveform = istft(spec)
# We can plot to see the output is the same as the input.
fig, axes = plt.subplots(figsize=(20, 5))
axes.plot(waveform.squeeze().data.numpy(), 'b')
axes.plot(out_waveform.squeeze().data.numpy(), 'r+')
plt.show()
###Output
_____no_output_____
###Markdown
Dynamic pseudo-inverseA `Decoder` can be a dynamic pseudo-inverse of an `Encoder` thanks to the `pinv_of` class method. The other way around also works.Let's see an example of how simple this is and make some plots.
###Code
# Same as before, define a filterbank for an encoder.
fb = FreeFB(n_filters=256, kernel_size=128, stride=64)
encoder = Encoder(fb)
# Define the pseudo inverse decoder.
decoder = Decoder.pinv_of(encoder)
waveform = torch.randn(1, 1, 32000)  # (batch, channel, wav_length)
spec_like = encoder(waveform)
out_waveform = decoder(spec_like)
fig, axes = plt.subplots(figsize=(20, 5))
axes.plot(waveform.squeeze().data.numpy(), 'b')
axes.plot(out_waveform.squeeze().data.numpy(), 'r+')
plt.show()
###Output
_____no_output_____
###Markdown
Making an encoder-decoder pair in one line [`make_enc_dec`](https://github.com/mpariente/asteroid/blob/master/asteroid/filterbanks/__init__.py#L12) is a small function that helps make encoder-decoder pairs in an efficient way. Let's see some examples:
###Code
from asteroid_filterbanks import make_enc_dec
# Create an adaptive encoder-decoder pair like in TasNet
enc, dec = make_enc_dec('free', n_filters=500, kernel_size=80)
# Create STFT/iSTFT pair in one line
stft, istft = make_enc_dec('stft', n_filters=512, kernel_size=256, stride=128)
# Create an analytic encoder and a pseudo-inverse decoder.
analytic_enc, pinv_dec = make_enc_dec('analytic_free', who_is_pinv='dec', n_filters=500, kernel_size=16, stride=8)
###Output
_____no_output_____
###Markdown
Writing your own filterbankYou need to define a subclass of `Filterbank` and overwrite the `filters` property.Your filters can either be static or dynamic:- Static filters are fixed at runtime (they can still be updated between forwards using gradient descent). This is the case of standard 1D-convolution or DFT filters for example.- Dynamic filters depend on some internal variables and are computed on the go, for every forward.
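###Markdown
A minimal sketch of such a subclass with static (fixed, non-learnable) filters is shown below. `RandomFB` is our own toy name, and the sketch mirrors how `FreeFB` exposes its filters in asteroid-filterbanks (a `filters()` method); in older versions of asteroid, `filters` was a property instead, so check `FreeFB`'s source for the exact contract of the version you installed.
###Code
# Sketch of a custom filterbank with fixed random filters (illustration only).
import torch
from asteroid_filterbanks.enc_dec import Filterbank, Encoder

class RandomFB(Filterbank):
    """Toy filterbank with fixed random filters."""
    def __init__(self, n_filters=64, kernel_size=32, stride=16):
        super().__init__(n_filters, kernel_size, stride=stride)
        # Static filters: registered as a buffer so they are not trained
        self.register_buffer("_filters", torch.randn(n_filters, 1, kernel_size))

    def filters(self):
        return self._filters

rand_enc = Encoder(RandomFB())
print(rand_enc(torch.randn(1, 1, 8000)).shape)
###Output
_____no_output_____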
tutorials/W2D5_ReinforcementLearning/W2D5_Tutorial1.ipynb | ###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size effects RPEs* You will understand how differences in US-CS timing effect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
  """Classical conditioning paradigm, except that one randomly selected reward
  magnitude, drawn from a list, is delivered instead of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
  def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
    """Build a probabilistic classical conditioning environment
Args:
- nsteps: Maximum number of steps
      - reward_magnitude: Reward magnitude.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
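# ---------------------------------------------------------------------------
# Editor's hedged sketch (not part of the original tutorial): a quick look at
# how the ClassicalConditioning environment above encodes time, the CS, and
# the US. The parameters simply mirror the defaults used later in Exercise 1;
# the underscore-prefixed variable names are illustrative choices only.
_demo_env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
print("CS is presented at state:", _demo_env.cs_time)           # 1/4 of the trial
print("Reward state [is_delay, t_in_delay]:", _demo_env.reward_state)
# Step through a few states around the US to watch the (next_state, reward) pairs
_s = _demo_env.cs_time + 8
for _ in range(4):
  _s_next, _r = _demo_env.get_outcome(_s)
  print(f"state {_s:2d} -> state {_s_next:2d}, reward {_r}")
  _s = _s_next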
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented with a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With an assumption of **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r)(r + \gamma V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value is updated using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
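As a quick numeric sanity check for your implementation (an editor's illustration with arbitrarily chosen values, not taken from the tutorial): suppose $\gamma = 0.98$, $\alpha = 0.001$, $V(s_{t}) = 4$, $V(s_{t+1}) = 5$, and $r_{t+1} = 0$ during the delay period. The TD-error is then $\delta_{t} = 0 + 0.98 \times 5 - 4 = 0.9$, and the update gives $V(s_{t}) \leftarrow 4 + 0.001 \times 0.9 = 4.0009$.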
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
      raise NotImplementedError("Student exercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount factor $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
except NotImplementedError:
    print("Finish Exercise 1 to enable this interactive demo")
    return
  learning_summary_plot(V_params,TDE_params)
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards, regardless of when they occur, equally and when greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that cause the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are equivalent. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The take-home message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why?This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to the CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
  """Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
  """Classical conditioning paradigm, except that one randomly selected reward
  magnitude, drawn from a list, is delivered instead of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
  def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
    """Build a probabilistic classical conditioning environment
Args:
- nsteps: Maximum number of steps
      - reward_magnitude: Reward magnitude.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
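# ---------------------------------------------------------------------------
# Editor's hedged sketch (not part of the original tutorial): a quick look at
# how the ClassicalConditioning environment above encodes time, the CS, and
# the US. The parameters simply mirror the defaults used later in Exercise 1;
# the underscore-prefixed variable names are illustrative choices only.
_demo_env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
print("CS is presented at state:", _demo_env.cs_time)           # 1/4 of the trial
print("Reward state [is_delay, t_in_delay]:", _demo_env.reward_state)
# Step through a few states around the US to watch the (next_state, reward) pairs
_s = _demo_env.cs_time + 8
for _ in range(4):
  _s_next, _r = _demo_env.get_outcome(_s)
  print(f"state {_s:2d} -> state {_s_next:2d}, reward {_r}")
  _s = _s_next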
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV13f4y1d7om', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV13f4y1d7om
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented with a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With an assumption of **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r)(r + \gamma V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value is updated using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
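As a quick numeric sanity check for your implementation (an editor's illustration with arbitrarily chosen values, not taken from the tutorial): suppose $\gamma = 0.98$, $\alpha = 0.001$, $V(s_{t}) = 4$, $V(s_{t+1}) = 5$, and $r_{t+1} = 0$ during the delay period. The TD-error is then $\delta_{t} = 0 + 0.98 \times 5 - 4 = 0.9$, and the update gives $V(s_{t}) \leftarrow 4 + 0.001 \times 0.9 = 4.0009$.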
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
      raise NotImplementedError("Student exercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount factor $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
except NotImplementedError:
    print("Finish Exercise 1 to enable this interactive demo")
    return
  learning_summary_plot(V_params,TDE_params)
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards, regardless of when they occur, equally and when greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that cause the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are equivalent. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The take-home message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why?This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to the CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
  """Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
  """Classical conditioning paradigm, except that one randomly selected reward
  magnitude, drawn from a list, is delivered instead of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
  def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
    """Build a probabilistic classical conditioning environment
Args:
- nsteps: Maximum number of steps
      - reward_magnitude: Reward magnitude.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
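# ---------------------------------------------------------------------------
# Editor's hedged sketch (not part of the original tutorial): a quick look at
# how the ClassicalConditioning environment above encodes time, the CS, and
# the US. The parameters simply mirror the defaults used later in Exercise 1;
# the underscore-prefixed variable names are illustrative choices only.
_demo_env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
print("CS is presented at state:", _demo_env.cs_time)           # 1/4 of the trial
print("Reward state [is_delay, t_in_delay]:", _demo_env.reward_state)
# Step through a few states around the US to watch the (next_state, reward) pairs
_s = _demo_env.cs_time + 8
for _ in range(4):
  _s_next, _r = _demo_env.get_outcome(_s)
  print(f"state {_s:2d} -> state {_s_next:2d}, reward {_r}")
  _s = _s_next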
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented with a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With an assumption of **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r)(r + \gamma V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value is updated using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
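As a quick numeric sanity check for your implementation (an editor's illustration with arbitrarily chosen values, not taken from the tutorial): suppose $\gamma = 0.98$, $\alpha = 0.001$, $V(s_{t}) = 4$, $V(s_{t+1}) = 5$, and $r_{t+1} = 0$ during the delay period. The TD-error is then $\delta_{t} = 0 + 0.98 \times 5 - 4 = 0.9$, and the update gives $V(s_{t}) \leftarrow 4 + 0.001 \times 0.9 = 4.0009$.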
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
      raise NotImplementedError("Student exercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount factor $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
except NotImplementedError:
    print("Finish Exercise 1 to enable this interactive demo")
    return
  learning_summary_plot(V_params,TDE_params)
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards, regardless of when they occur, equally and when greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that cause the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The take-home message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why? This phenomenon often fools people attempting to train animals--beware!
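One way to try this quickly is sketched below (not part of the original exercise; the function and variable names are ours): it is the Exercise 1 solution with the `is_delay` gate dropped, so every state is updated whether or not the CS has appeared yet.

```python
# Exercise 1 solution without the `is_delay` gate (a sketch for the bonus exercise).
def td_learner_no_cs_gate(env, n_trials, gamma=0.98, alpha=0.001):
  V = np.zeros(env.n_steps)
  TDE = np.zeros((env.n_steps, n_trials))
  for n in range(n_trials):
    state = 0
    for t in range(env.n_steps):
      next_state, reward = env.get_outcome(state)
      TDE[state, n] = reward + gamma * V[next_state] - V[state]
      V[state] += alpha * TDE[state, n]  # note: no `* is_delay` factor
      state = next_state
  return V, TDE

env_nogate = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V_nogate, TDE_nogate = td_learner_no_cs_gate(env_nogate, n_trials=20000)
learning_summary_plot(V_nogate, TDE_nogate)
```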
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/YoNbc9M92YY
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With the assumption of a **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r)(r + V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value is updated using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
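As a quick sanity check of the two update equations above, here is one step worked through with arbitrary numbers (not taken from the exercise): suppose $\gamma = 0.98$, $\alpha = 0.001$, the current estimates are $V(s_{t}) = 4$ and $V(s_{t+1}) = 5$, and no reward arrives at this step ($r_{t+1} = 0$). Then\begin{align}\delta_{t} &= 0 + 0.98 \times 5 - 4 = 0.9, \\V(s_{t}) &\leftarrow 4 + 0.001 \times 0.9 = 4.0009,\end{align}so the estimate for $s_{t}$ creeps up toward the discounted value of its successor; a reward at this step would have produced a much larger $\delta_{t}$.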
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = reward + gamma * V[next_state] - V[state]
# Write an expression to update the value function
# V[state] += alpha * TDE[state, n]
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.0001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
    except NotImplementedError:
      print("Finish Exercise 1 to enable this interactive demo")
      return
    learning_summary_plot(V_params, TDE_params)
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards, regardless of when they occur, equally and when greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that cause the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
# np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 10000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
# V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=0.001)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The take-home message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why? This phenomenon often fools people attempting to train animals--beware!
###Code
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/YoNbc9M92YY
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With the assumption of a **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r)(r + V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value is updated using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.0001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
    except NotImplementedError:
      print("Finish Exercise 1 to enable this interactive demo")
      return
    learning_summary_plot(V_params, TDE_params)
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards, regardless of when they occur, equally and when greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that cause the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The take-home message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why? This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With the assumption of a **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r)(r + V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value is updated using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
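###Markdown
Optional sanity check: because this environment is deterministic, the value function the learner should converge to can be written down directly: each delay state is worth the reward discounted by the number of steps remaining until delivery. The short sketch below computes that analytic curve for the defaults used above (n_steps=40, reward_magnitude=10, reward_time=10, gamma=0.98, so the CS occurs at state 9 and the reward is triggered on leaving state 19; these indices are our reading of the environment code, not part of the original exercise) and compares it with the learned V.
###Code
# Analytic value function for the deterministic environment (a sketch, not
# part of the original exercise). Assumes the Exercise 1 defaults; states
# before the CS and after the reward keep a value of zero.
import numpy as np

gamma_chk, reward_chk = 0.98, 10
cs_state = int(40 / 4) - 1   # state at which the CS is shown (9)
us_state = cs_state + 10     # state whose transition delivers the reward (19)

V_analytic = np.zeros(40)
for s in range(cs_state + 1, us_state + 1):
    V_analytic[s] = gamma_chk ** (us_state - s) * reward_chk

if 'V' in globals():
    print("Largest gap between learned and analytic values:",
          np.max(np.abs(V - V_analytic)))
###Output
_____no_output_____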
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when training is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
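###Markdown
If you want to put a number on the transfer rather than eyeballing it, here is a small sketch (our own addition, assuming the TDE array from Exercise 1 and the default CS at state 9) that reports the first trial on which the TD error at the CS reaches half of its final size.
###Code
# When does the prediction error "arrive" at the CS? (sketch)
# Assumes TDE from Exercise 1 and the Exercise 1 defaults (CS shown at state 9).
import numpy as np

if 'TDE' in globals():
    cs_state = 9
    final_error = TDE[cs_state, -1]                      # CS error after learning
    half_way_trial = np.argmax(TDE[cs_state, :] >= 0.5 * final_error)
    print(f"TD error at the CS first reaches half its final size on trial {half_way_trial}")
else:
    print("Complete Exercise 1 to run this check!")
###Output
_____no_output_____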
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.0001, min=0.0001, max=0.1, step=0.0001, readout_format='.4f', description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
except NotImplementedError:
print("Finish Exercise 1 to enable this interactive demo")
learning_summary_plot(V_params,TDE_params)
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards equally, regardless of when they occur; when gamma is greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
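###Markdown
To make the role of gamma more tangible, the short sketch below (purely illustrative, using the 10-step CS-US delay from Exercise 1) prints how much weight a reward arriving 10 steps in the future receives under a few settings of gamma.
###Code
# How strongly is a reward k steps away weighted, for different gammas? (sketch)
# The 10-step horizon matches reward_time in Exercise 1.
delay = 10
for gamma_demo in (0.0, 0.5, 0.9, 0.98, 1.0):
    weight = gamma_demo ** delay
    print(f"gamma = {gamma_demo:.2f} -> weight on a reward {delay} steps away: {weight:.3f}")
###Output
_____no_output_____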
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that causes the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
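###Markdown
A small back-of-the-envelope check of that explanation (our own sketch, assuming learning has converged in the 6/14 environment above): the value of the reward-delivering state settles near the average reward, so the error there flips sign depending on which reward is drawn.
###Code
# Back-of-the-envelope TD errors at the reward-delivering state (sketch).
# Assumes the value there has converged to the average reward of the 6/14 environment.
expected_reward = (6 + 14) / 2      # = 10
for delivered in (6, 14):
    print(f"reward = {delivered:2d} -> TD error of roughly {delivered - expected_reward:+.0f}")
###Output
_____no_output_____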
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The takehome message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
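###Markdown
To see why alpha = 1 cannot settle, here is a stripped-down sketch (our own toy example, not part of the original exercise): a single state whose reward is 10 with probability 0.8, updated with the same rule, for a large and a small learning rate.
###Code
# Toy illustration of why alpha = 1 never converges with stochastic rewards (sketch).
# Single-state version of the update V <- V + alpha * (r - V); reward statistics
# mirror Section 3 (10 units with probability 0.8, so the expected reward is 8).
import numpy as np

rng = np.random.default_rng(0)
for alpha_demo in (1.0, 0.001):
    v = 0.0
    for _ in range(5000):
        r = 10 * (rng.random() < 0.8)
        v += alpha_demo * (r - v)
    print(f"alpha = {alpha_demo:<6} -> estimate after 5000 updates: {v:.2f}")
###Output
_____no_output_____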
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why?This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
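###Markdown
If you would rather see the effect than imagine it, the sketch below (our own addition, a lightly modified copy of the Exercise 1 solution) simply drops the `is_delay` factor from the value update, so every state is updated, and then plots the resulting periodic value function.
###Code
# Sketch: the Exercise 1 TD learner without gating the value update by is_delay.
# Because the trial wraps from the last state back to state 0, every state is now
# some number of steps away from the next reward, so the value function becomes periodic.
import numpy as np

def td_learner_no_cs(env, n_trials, gamma=0.98, alpha=0.001):
    V = np.zeros(env.n_steps)
    TDE = np.zeros((env.n_steps, n_trials))
    for n in range(n_trials):
        state = 0
        for t in range(env.n_steps):
            next_state, reward = env.get_outcome(state)
            TDE[state, n] = reward + gamma * V[next_state] - V[state]
            V[state] += alpha * TDE[state, n]   # note: no is_delay factor here
            state = next_state
    return V, TDE

env_no_cs = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V_no_cs, TDE_no_cs = td_learner_no_cs(env_no_cs, n_trials=20000)
learning_summary_plot(V_no_cs, TDE_no_cs)
###Output
_____no_output_____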
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward *= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
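###Markdown
Before using these classes, it can help to see the tapped-delay-line bookkeeping directly. The short sketch below (our own addition, using the same defaults as Exercise 1) prints the [is_delay, t_in_delay] attributes stored in state_dict for a few states around the CS.
###Code
# Peek at the state attributes around the CS (sketch; Exercise 1 defaults).
env_peek = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
for s in range(7, 13):
    print(f"state {s:2d}: [is_delay, t_in_delay] = {env_peek.state_dict[s]}")
###Output
_____no_output_____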
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/YoNbc9M92YY
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - For each exercise, we use a different CS-US contingency. - The agent's goal is to learn to predict expected rewards from each state in the trial. __Definitions:__1. Returns: \begin{align}G_{t} = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + ... = \sum \limits_{k = 1}^{\infty} \gamma^{k-1} r_{t+k}\end{align}2. Value: \begin{align}V(s_{t}) = \mathbb{E} [ G_{t} | s_{t}] = \mathbb{E} [r_{t+1} + \gamma V_{t+1} | s_{t}] \end{align}3. TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}4. Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the CS. Save TD-errors over learning so we can visualize them -- you're going to need to compute them anyway. Use the provided code to estimate the value function.
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when training is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.0001, min=0.0001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
except NotImplementedError:
print("Finish Exercise 1 to enable this interactive demo")
learning_summary_plot(V_params,TDE_params)
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards equally, regardless of when they occur; when gamma is greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses multiple rewards. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted rewards of 6 or 14 units (both equally likely). Can you find another pair of rewards, both equally likely, that exactly matches this value function? Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The takehome message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: TD-reset In this exercise we will implement a heuristic commonly used in modeling the activity of dopamine neurons, TD-reset. Implement TD-learning as in previous exercises, but set TD-error to zero on all steps after reward (US). 1. Plot value function and TD-errors. 2. Can you explain how the reset is changing the TD-errors and value function?
###Code
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
""" Temporal Difference learning with the TD-reset update rule
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD learning with the TD-reset update rule
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD learning with the TD-reset update rule")
########################################################################
# Write an expression to compute the TD-error using the TD-reset rule
if reset:
TDE_reset[state, n] = ...
else:
TDE_reset[state, n] = ...
# Set reset flag if we receive a reward > 0
if reward > 0:
reset = True
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE_reset
# Uncomment these two lines to visualize your results
# env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
# p_reward=0.8)
# V_reset, TDE_reset = td_reset_learner(env, n_trials=20000)
# learning_summary_plot(V_reset, TDE_reset)
#to_remove solution
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
""" Temporal Difference learning with reset
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error using the TD-reset rule
if reset:
TDE_reset[state, n] = 0
else:
TDE_reset[state, n] = reward + gamma * V[next_state] - V[state]
# Set reset flag if we receive a reward > 0
if reward > 0:
reset = True
# Write an expression to update the value function
V[state] += alpha * TDE_reset[state,n] * is_delay
# Update state
state = next_state
return V, TDE_reset
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_reset, TDE_reset = td_reset_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V_reset, TDE_reset)
###Output
_____no_output_____
###Markdown
Exercise 3: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why?This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy 2020 -- RL Day Tutorial 1 - Learning to predictPlease execute the cell below to initialize the notebook environment
###Code
#@ title Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets # import widgets
# @title Figure Settings
%matplotlib inline
fig_w, fig_h = (8, 8)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
%config InlineBackend.figure_format = 'retina'
# @title Helper functions
# Parameters for simulations
reward_time = 10
reward_magnitude = 10
n_trials = 20000
n_steps = 40
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.set_xticklabels([f"{int(skip * x)}" for x in ax.get_xticks()])
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...."
if not (((r1+r2) & 5) & 7) - 4: # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward *= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
###Markdown
--- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how RPEs behave in classical conditioning and what we should expect if Dopamine represents a "canonical" model-free RPE. * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs* Advanced exercise--- __Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - For each exercise, we use a different CS-US contingency. - The agent's goal is to learn to predict expected rewards from each state in the trial. __Definitions:__1. Returns: \begin{align}G_{t} = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + ... = \sum \limits_{k = 1}^{\infty} \gamma^{k-1} r_{t+k}\end{align}2. Value: \begin{align}V(s_{t}) = \mathbb{E} [ G_{t} | s_{t}] = \mathbb{E} [r_{t+1} + \gamma V_{t+1} | s_{t}] \end{align}3. TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}4. Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} **Run the following code for your implementation:** --- EXERCISE 1: TD-learning with guaranteed rewards. Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the CS. Save TD-errors over learning so we can visualize them--you're going to need to compute them anyway. Use the provided code to estimate the value function.
###Code
env = ClassicalConditioning(n_steps, reward_magnitude=reward_magnitude, reward_time=reward_time) # Initialize the environment with n_steps; CS is presented ~1/4 along the way (e.g. for n_steps=40, CS is presented at t=9)
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state) # Get next state and next reward
is_delay = env.state_dict[state][0] # Is the current state in the delay period (after CS)?
########################################################################
## Insert code to calculate the TD-error
# Hint: this is the reward prediction *error*. When would we expect
# Comment out the last line in this block when you're done.
raise NotImplementedError("Student excercise: need to implement RPE")
#################################################################################
########################################################################
## Insert code to update the Value Function V(st) below.
# Comment out the last line in this block when you're done.
# Hint: use the TD error you calculated above.
raise NotImplementedError("Student excercise: need to implement Value update")
#################################################################################
state = next_state # Update state
return V, TDE
#V, TDE = td_learner(env, n_trials)
#learning_summary_plot(V, TDE)
#to_remove solution
env = ClassicalConditioning(n_steps, reward_magnitude, reward_time) # Initialize the environment with n_steps; CS is presented ~1/4 along the way (e.g. for n_steps=40, CS is presented at t=9)
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(n_steps):
next_state, reward = env.get_outcome(state) # Get next state and next reward
is_delay = env.state_dict[state][0] # Is the current state in the delay period (after CS)?
########################################################################
# Insert code to calculate the TD-error.
# Hint: This is the reward prediction *error*. When would we expect to see a reward?
# Comment out the last line in this block when you're done.
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# raise NotImplementedError("Student excercise: need to implement RPE")
#################################################################################
########################################################################
## Insert code to update the Value Function V(st) below.
# Comment out the last line in this block when you're done.
# Hint: use the TD error you calculated above.
V[state] += alpha * TDE[state, n] * is_delay
# raise NotImplementedError("Student excercise: need to implement Value update")
#################################################################################
state = next_state # Update state
return V, TDE
V, TDE = td_learner(env, 20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
findfont: Font family ['xkcd', 'xkcd Script', 'Humor Sans', 'Comic Neue', 'Comic Sans MS'] not found. Falling back to DejaVu Sans.
###Markdown
Interactive Demo 1: US to CS Transfer ---During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US) (e.g., the smell of tasty food) to the conditioned stimulus (CS) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (red line), only the reward state has high reward prediction error. As training progresses (blue, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when training is complete (green). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='r-', markerfmt='rd', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='g-', markerfmt='gs', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='b-', markerfmt='bo', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount Factors---Our TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occurring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?
###Code
#@title
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.0001, min=0.0001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps, reward_magnitude, reward_time)
try:
V_params, TDE_params = td_learner(env, n_trials, gamma=gamma, alpha=alpha)
except NotImplementedError:
print("Finish Exercise 1 to enable this interactive demo")
learning_summary_plot(V_params,TDE_params)
#to_remove solution
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards equally, regardless of when they occur; when gamma is greater than one, it
starts to *prefer* rewards in the future, rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
EXERCISE 2: TD-learning with varying reward magnitudes In the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. First, we will replace the environment with one that dispenses multiple rewards. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted rewards of 6 or 14 units (both equally likely). Can you find another pair of rewards, both equally likely, that exactly matches this value function? Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title Interactive Demo 3: Match the Value Functions
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, description="Reward 1"),
r2 = widgets.IntText(value=0, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Examining the TD Error---Run the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove solution
"""
The TD trace now takes on negative values because the reward delivered is sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- EXERCISE 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment?Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
try:
env = ProbabilisticCC(n_steps, reward_magnitude=10, reward_time=10, p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic,TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove solution
"""
The multi-reward and probabilistic reward enviroments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
ADVANCED Exercise: TD-reset In this exercise we will implement a commonly used heuristic used in modeling activity of dopamine neurons, TD-reset. Implement TD-learning as in previous exercises, but set TD-error to zero on all steps after reward (US). 1. Plot value function and TD-errors. 2. Can you explain how the reset is changing the TD-errors and value function?
###Code
env = ProbabilisticCC(n_steps, reward_magnitude=10, reward_time=10, p_reward=0.8)
def td_reset_learner(env, n_trials, alpha=0.001, gamma=0.98):
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
########################################################################
# Insert your code here to convert your TD learner into TD-reset
# Hint: you can reuse much of your previous work.
#TDE_reset[state, n] = ....
#
#
#
#V[state] += ....
# Comment out the line below when you are done
raise NotImplementedError("Student excercise: need to implement TD-reset update")
########################################################################
state = next_state
return V, TDE_reset
# Uncomment these two lines to visualize your results
#V_reset, TDE_reset = td_reset_learner(env, n_trials)
#learning_summary_plot(V_reset, TDE_reset)
#to_remove solution
env = ProbabilisticCC(n_steps, reward_magnitude=10, reward_time=10, p_reward=0.8)
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
########################################################################
# Insert your code here to convert your TD learner into TD-reset
# Hint: you can reuse much of your previous work.
# Compute the TD-error, zeroing it out on every step after the reward (US)
if reset:
TDE_reset[state, n] = 0
else:
TDE_reset[state, n] = reward + gamma * V[next_state] - V[state]
# Once the reward has been delivered, reset the TD-error for the rest of the trial
if reward:
reset = True
# Update the value function
V[state] += alpha * TDE_reset[state, n] * is_delay
########################################################################
state = next_state
return V, TDE_reset
V_reset, TDE_reset = td_reset_learner(env, n_trials)
learning_summary_plot(V_reset, TDE_reset)
###Output
_____no_output_____
###Markdown
Advanced: Removing the CS---In Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why? This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove solution
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22--the animal
just wants rewards; it doesn't care about /your/ experiment!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV13f4y1d7om', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV13f4y1d7om
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - For each exercise, we use a different CS-US contingency. - The agent's goal is to learn to predict expected rewards from each state in the trial. __Definitions:__1. Returns: \begin{align}G_{t} = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + ... = \sum \limits_{k = 1}^{\infty} \gamma^{k-1} r_{t+k}\end{align}2. Value: \begin{align}V(s_{t}) = \mathbb{E} [ G_{t} | s_{t}] = \mathbb{E} [r_{t+1} + \gamma V_{t+1} | s_{t}] \end{align}3. TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}4. Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the CS. Save TD-errors over learning so we can visualize them -- you're going to need to compute them anyway. Use the provided code to estimate the value function.
###Code
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = ...
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
# env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
# V, TDE = td_learner(env, n_trials=20000)
# learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occuring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
learning_summary_plot(V_params, TDE_params)
except NotImplementedError:
print("Finish Exercise 1 to enable this interactive demo")
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards equally, regardless of when they occur; when gamma is greater than one,
it starts to *prefer* rewards in the future rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses multiple rewards. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted rewards of 6 or 14 units (both equally likely). Can you find another pair of rewards, both equally likely, that exactly match this value function? Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment? Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The takehome message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. The take-home lesson: this simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: TD-reset In this exercise we will implement TD-reset, a heuristic commonly used in modeling the activity of dopamine neurons. Implement TD-learning as in previous exercises, but set the TD-error to zero on all steps after the reward (US). 1. Plot the value function and TD-errors. 2. Can you explain how the reset changes the TD-errors and value function?
###Code
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
""" Temporal Difference learning with the TD-reset update rule
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD learning with the TD-reset update rule
# Fill out function and remove
raise NotImplementedError("Student excercise: implement TD learning with the TD-reset update rule")
########################################################################
# Write an expression to compute the TD-error using the TD-reset rule
if reset:
TDE_reset[state, n] = ...
else:
TDE_reset[state, n] = ...
# Set reset flag if we receive a reward > 0
if reward > 0:
reset = True
# Write an expression to update the value function
V[state] += ...
# Update state
state = next_state
return V, TDE_reset
# Uncomment these two lines to visualize your results
# env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
# p_reward=0.8)
# V_reset, TDE_reset = td_reset_learner(env, n_trials=20000)
# learning_summary_plot(V_reset, TDE_reset)
#to_remove solution
def td_reset_learner(env, n_trials, alpha=0.25, gamma=0.98):
""" Temporal Difference learning with reset
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps)
TDE_reset = np.zeros((env.n_steps, n_trials))
for n in range(n_trials):
state = 0
reset = False
for t in range(env.n_steps):
next_state, reward = env.get_outcome(state)
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error using the TD-reset rule
if reset:
TDE_reset[state, n] = 0
else:
TDE_reset[state, n] = reward + gamma * V[next_state] - V[state]
# Set reset flag if we receive a reward > 0
if reward > 0:
reset = True
# Write an expression to update the value function
V[state] += alpha * TDE_reset[state,n] * is_delay
# Update state
state = next_state
return V, TDE_reset
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_reset, TDE_reset = td_reset_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V_reset, TDE_reset)
###Output
_____no_output_____
###Markdown
Exercise 3: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why? This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 2, Day 5, Tutorial 1 Learning to Predict__Content creators:__ Marcelo Mattar and Eric DeWitt with help from Matt Krause__Content reviewers:__ Byron Galbraith and Michael Waskom --- Tutorial objectives In this tutorial, we will learn how to estimate state-value functions in a classical conditioning paradigm using Temporal Difference (TD) learning and examine TD-errors at the presentation of the conditioned and unconditioned stimulus (CS and US) under different CS-US contingencies. These exercises will provide you with an understanding of both how reward prediction errors (RPEs) behave in classical conditioning and what we should expect to see if Dopamine represents a "canonical" model-free RPE. At the end of this tutorial: * You will learn to use the standard tapped delay line conditioning model* You will understand how RPEs move to CS* You will understand how variability in reward size affects RPEs* You will understand how differences in US-CS timing affect RPEs
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
from matplotlib import ticker
def plot_value_function(V, ax=None, show=True):
"""Plot V(s), the value function"""
if not ax:
fig, ax = plt.subplots()
ax.stem(V, use_line_collection=True)
ax.set_ylabel('Value')
ax.set_xlabel('State')
ax.set_title("Value function: $V(s)$")
if show:
plt.show()
def plot_tde_trace(TDE, ax=None, show=True, skip=400):
"""Plot the TD Error across trials"""
if not ax:
fig, ax = plt.subplots()
indx = np.arange(0, TDE.shape[1], skip)
im = ax.imshow(TDE[:,indx])
positions = ax.get_xticks()
# Avoid warning when setting string tick labels
ax.xaxis.set_major_locator(ticker.FixedLocator(positions))
ax.set_xticklabels([f"{int(skip * x)}" for x in positions])
ax.set_title('TD-error over learning')
ax.set_ylabel('State')
ax.set_xlabel('Iterations')
ax.figure.colorbar(im)
if show:
plt.show()
def learning_summary_plot(V, TDE):
"""Summary plot for Ex1"""
fig, (ax1, ax2) = plt.subplots(nrows = 2, gridspec_kw={'height_ratios': [1, 2]})
plot_value_function(V, ax=ax1, show=False)
plot_tde_trace(TDE, ax=ax2, show=False)
plt.tight_layout()
def reward_guesser_title_hint(r1, r2):
""""Provide a mildly obfuscated hint for a demo."""
if (r1==14 and r2==6) or (r1==6 and r2==14):
return "Technically correct...(the best kind of correct)"
if ~(~(r1+r2) ^ 11) - 1 == (6 | 24): # Don't spoil the fun :-)
return "Congratulations! You solved it!"
return "Keep trying...."
#@title Default title text
class ClassicalConditioning:
def __init__(self, n_steps, reward_magnitude, reward_time):
# Task variables
self.n_steps = n_steps
self.n_actions = 0
self.cs_time = int(n_steps/4) - 1
# Reward variables
self.reward_state = [0,0]
self.reward_magnitude = None
self.reward_probability = None
self.reward_time = None
self.set_reward(reward_magnitude, reward_time)
# Time step at which the conditioned stimulus is presented
# Create a state dictionary
self._create_state_dictionary()
def set_reward(self, reward_magnitude, reward_time):
"""
Determine reward state and magnitude of reward
"""
if reward_time >= self.n_steps - self.cs_time:
self.reward_magnitude = 0
else:
self.reward_magnitude = reward_magnitude
self.reward_state = [1, reward_time]
def get_outcome(self, current_state):
"""
Determine next state and reward
"""
# Update state
if current_state < self.n_steps - 1:
next_state = current_state + 1
else:
next_state = 0
# Check for reward
if self.reward_state == self.state_dict[current_state]:
reward = self.reward_magnitude
else:
reward = 0
return next_state, reward
def _create_state_dictionary(self):
"""
This dictionary maps number of time steps/ state identities
in each episode to some useful state attributes:
state - 0 1 2 3 4 5 (cs) 6 7 8 9 10 11 12 ...
is_delay - 0 0 0 0 0 0 (cs) 1 1 1 1 1 1 1 ...
t_in_delay - 0 0 0 0 0 0 (cs) 1 2 3 4 5 6 7 ...
"""
d = 0
self.state_dict = {}
for s in range(self.n_steps):
if s <= self.cs_time:
self.state_dict[s] = [0,0]
else:
d += 1 # Time in delay
self.state_dict[s] = [1,d]
class MultiRewardCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that one randomly selected reward,
magnitude, from a list, is delivered of a single fixed reward."""
def __init__(self, n_steps, reward_magnitudes, reward_time=None):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: LIST of possible reward magnitudes.
- reward_time: Single fixed reward time
Uses numpy global random state.
"""
super().__init__(n_steps, 1, reward_time)
self.reward_magnitudes = reward_magnitudes
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward=np.random.choice(self.reward_magnitudes)
return next_state, reward
class ProbabilisticCC(ClassicalConditioning):
"""Classical conditioning paradigm, except that rewards are stochastically omitted."""
def __init__(self, n_steps, reward_magnitude, reward_time=None, p_reward=0.75):
""""Build a multi-reward classical conditioning environment
Args:
- nsteps: Maximum number of steps
- reward_magnitudes: Reward magnitudes.
- reward_time: Single fixed reward time.
- p_reward: probability that reward is actually delivered in rewarding state
Uses numpy global random state.
"""
super().__init__(n_steps, reward_magnitude, reward_time)
self.p_reward = p_reward
def get_outcome(self, current_state):
next_state, reward = super().get_outcome(current_state)
if reward:
reward*= int(np.random.uniform(size=1)[0] < self.p_reward)
return next_state, reward
###Output
_____no_output_____
###Markdown
--- Section 1: TD-learning
###Code
#@title Video 1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="YoNbc9M92YY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/YoNbc9M92YY
###Markdown
__Environment:__- The agent experiences the environment in episodes or trials. - Episodes terminate by transitioning to the inter-trial-interval (ITI) state and they are initiated from the ITI state as well. We clamp the value of the terminal/ITI states to zero. - The classical conditioning environment is composed of a sequence of states that the agent deterministically transitions through. Starting at State 0, the agent moves to State 1 in the first step, from State 1 to State 2 in the second, and so on. These states represent time in the tapped delay line representation- Within each episode, the agent is presented a CS and US (reward). - The CS is always presented at 1/4 of the total duration of the trial. The US (reward) is then delivered after the CS. The interval between the CS and US is specified by `reward_time`.- The agent's goal is to learn to predict expected rewards from each state in the trial. **General concepts*** Return $G_{t}$: future cumulative reward, which can be written in a recursive form\begin{align}G_{t} &= \sum \limits_{k = 0}^{\infty} \gamma^{k} r_{t+k+1} \\&= r_{t+1} + \gamma G_{t+1}\end{align}where $\gamma$ is the discount factor that controls the importance of future rewards, and $\gamma \in [0, 1]$. $\gamma$ may also be interpreted as the probability of continuing the trajectory.* Value function $V_{\pi}(s_t=s)$: expectation of the return\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ G_{t}\; | \; s_t=s, a_{t:\infty}\sim\pi] \\& = \mathbb{E} [ r_{t+1} + \gamma G_{t+1}\; | \; s_t=s, a_{t:\infty}\sim\pi]\end{align}With an assumption of **Markov process**, we thus have:\begin{align}V_{\pi}(s_t=s) &= \mathbb{E} [ r_{t+1} + \gamma V_{\pi}(s_{t+1})\; | \; s_t=s, a_{t:\infty}\sim\pi] \\&= \sum_a \pi(a|s) \sum_{r, s'}p(s', r|s, a)(r + \gamma V_{\pi}(s_{t+1}=s'))\end{align}**Temporal difference (TD) learning*** With a Markovian assumption, we can use $V(s_{t+1})$ as an imperfect proxy for the true value $G_{t+1}$ (Monte Carlo bootstrapping), and thus obtain the generalised equation to calculate the TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updated by using the learning rate constant $\alpha$:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} (Reference: https://web.stanford.edu/group/pdplab/pdphandbook/handbookch10.html)__Definitions:__* TD-error:\begin{align}\delta_{t} = r_{t+1} + \gamma V(s_{t+1}) - V(s_{t})\end{align}* Value updates:\begin{align}V(s_{t}) \leftarrow V(s_{t}) + \alpha \delta_{t}\end{align} Exercise 1: TD-learning with guaranteed rewards Implement TD-learning to estimate the state-value function in the classical-conditioning world with guaranteed rewards, with a fixed magnitude, at a fixed delay after the conditioned stimulus, CS. Save TD-errors over learning (i.e., over trials) so we can visualize them afterwards. In order to simulate the effect of the CS, you should only update $V(s_{t})$ during the delay period after the CS. This period is indicated by the boolean variable `is_delay`. This can be implemented by multiplying the expression for updating the value function by `is_delay`.Use the provided code to estimate the value function.
###Code
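# A quick numerical illustration of the update rules defined above (hypothetical
# numbers, not part of the exercise): suppose V(s_t) = 2.0, V(s_t+1) = 5.0,
# r_t+1 = 0, gamma = 0.98 and alpha = 0.001.
_delta_example = 0 + 0.98 * 5.0 - 2.0        # TD-error: r + gamma*V(s') - V(s) = 2.9
_v_example = 2.0 + 0.001 * _delta_example    # updated value: V(s) + alpha*delta = 2.0029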
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
########################################################################
## TODO for students: implement TD error and value function update
# Fill out function and remove
# raise NotImplementedError("Student exercise: implement TD error and value function update")
#################################################################################
# Write an expression to compute the TD-error
TDE[state, n] = reward + gamma * V[next_state] - V[state]
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
# Uncomment once the td_learner function is complete
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
learning_summary_plot(V, TDE)
#to_remove solution
def td_learner(env, n_trials, gamma=0.98, alpha=0.001):
""" Temporal Difference learning
Args:
env (object): the environment to be learned
n_trials (int): the number of trials to run
gamma (float): temporal discount factor
alpha (float): learning rate
Returns:
ndarray, ndarray: the value function and temporal difference error arrays
"""
V = np.zeros(env.n_steps) # Array to store values over states (time)
TDE = np.zeros((env.n_steps, n_trials)) # Array to store TD errors
for n in range(n_trials):
state = 0 # Initial state
for t in range(env.n_steps):
# Get next state and next reward
next_state, reward = env.get_outcome(state)
# Is the current state in the delay period (after CS)?
is_delay = env.state_dict[state][0]
# Write an expression to compute the TD-error
TDE[state, n] = (reward + gamma * V[next_state] - V[state])
# Write an expression to update the value function
V[state] += alpha * TDE[state, n] * is_delay
# Update state
state = next_state
return V, TDE
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
V, TDE = td_learner(env, n_trials=20000)
with plt.xkcd():
learning_summary_plot(V, TDE)
###Output
_____no_output_____
###Markdown
Interactive Demo 1: US to CS Transfer During classical conditioning, the subject's behavioral response (e.g., salivating) transfers from the unconditioned stimulus (US; like the smell of tasty food) to the conditioned stimulus (CS; like Pavlov ringing his bell) that predicts it. Reward prediction errors play an important role in this process by adjusting the value of states according to their expected, discounted return.Use the widget below to examine how reward prediction errors change over time. Before training (orange line), only the reward state has high reward prediction error. As training progresses (blue line, slider), the reward prediction errors shift to the conditioned stimulus, where they end up when the trial is complete (green line). Dopamine neurons, which are thought to carry reward prediction errors _in vivo_, show exactly the same behavior!
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
@widgets.interact
def plot_tde_by_trial(trial = widgets.IntSlider(value=5000, min=0, max=n_trials-1 , step=1, description="Trial #")):
if 'TDE' not in globals():
print("Complete Exercise 1 to enable this interactive demo!")
else:
fig, ax = plt.subplots()
ax.axhline(0, color='k') # Use this + basefmt=' ' to keep the legend clean.
ax.stem(TDE[:, 0], linefmt='C1-', markerfmt='C1d', basefmt=' ',
label="Before Learning (Trial 0)",
use_line_collection=True)
ax.stem(TDE[:, -1], linefmt='C2-', markerfmt='C2s', basefmt=' ',
label="After Learning (Trial $\infty$)",
use_line_collection=True)
ax.stem(TDE[:, trial], linefmt='C0-', markerfmt='C0o', basefmt=' ',
label=f"Trial {trial}",
use_line_collection=True)
ax.set_xlabel("State in trial")
ax.set_ylabel("TD Error")
ax.set_title("Temporal Difference Error by Trial")
ax.legend()
###Output
_____no_output_____
###Markdown
Interactive Demo 2: Learning Rates and Discount FactorsOur TD-learning agent has two parameters that control how it learns: $\alpha$, the learning rate, and $\gamma$, the discount factor. In Exercise 1, we set these parameters to $\alpha=0.001$ and $\gamma=0.98$ for you. Here, you'll investigate how changing these parameters alters the model that TD-learning learns.Before enabling the interactive demo below, take a moment to think about the functions of these two parameters. $\alpha$ controls the size of the Value function updates produced by each TD-error. In our simple, deterministic world, will this affect the final model we learn? Is a larger $\alpha$ necessarily better in more complex, realistic environments?The discount rate $\gamma$ applies an exponentially-decaying weight to returns occuring in the future, rather than the present timestep. How does this affect the model we learn? What happens when $\gamma=0$ or $\gamma \geq 1$?Use the widget to test your hypotheses.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
@widgets.interact
def plot_summary_alpha_gamma(alpha = widgets.FloatSlider(value=0.001, min=0.001, max=0.1, step=0.0001, description="alpha"),
gamma = widgets.FloatSlider(value=0.980, min=0, max=1.1, step=0.010, description="gamma")):
env = ClassicalConditioning(n_steps=40, reward_magnitude=10, reward_time=10)
try:
V_params, TDE_params = td_learner(env, n_trials=20000, gamma=gamma, alpha=alpha)
learning_summary_plot(V_params, TDE_params)
except NotImplementedError:
print("Finish Exercise 1 to enable this interactive demo")
#to_remove explanation
"""
Alpha determines how fast the model learns. In the simple, deterministic world
we're using here, this allows the model to quickly converge onto the "true"
model that heavily values the conditioned stimulus. In more complex environments,
however, excessively large values of alpha can slow, or even prevent, learning,
as we'll see later.
Gamma effectively controls how much the model cares about the future: larger values of
gamma cause the model to weight future rewards nearly as much as present ones. At gamma=1,
the model weights all rewards equally, regardless of when they occur; when gamma is greater than one,
it starts to *prefer* rewards in the future rather than the present (this is rarely good).
When gamma=0, however, the model becomes greedy and only considers rewards that
can be obtained immediately.
""";
###Output
_____no_output_____
###Markdown
--- Section 2: TD-learning with varying reward magnitudesIn the previous exercise, the environment was as simple as possible. On every trial, the CS predicted the same reward, at the same time, with 100% certainty. In the next few exercises, we will make the environment progressively more complicated and examine the TD-learner's behavior. Interactive Demo 3: Match the Value FunctionsFirst, we will replace the environment with one that dispenses one of several rewards, chosen at random. Shown below is the final value function $V$ for a TD learner that was trained in an environment where the CS predicted a reward of 6 or 14 units (both rewards were equally likely). Can you find another pair of rewards that cause the agent to learn the same value function? Assume each reward will be dispensed 50% of the time. Hints:* Carefully consider the definition of the value function $V$. This can be solved analytically.* There is no need to change $\alpha$ or $\gamma$. * Due to the randomness, there may be a small amount of variation.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
n_trials = 20000
np.random.seed(2020)
rng_state = np.random.get_state()
env = MultiRewardCC(40, [6, 14], reward_time=10)
V_multi, TDE_multi = td_learner(env, n_trials, gamma=0.98, alpha=0.001)
@widgets.interact
def reward_guesser_interaction(r1 = widgets.IntText(value=0, min=0, max=50, description="Reward 1"),
r2 = widgets.IntText(value=0, min=0, max=50, description="Reward 2")):
try:
env2 = MultiRewardCC(40, [r1, r2], reward_time=10)
V_guess, _ = td_learner(env2, n_trials, gamma=0.98, alpha=0.001)
fig, ax = plt.subplots()
m, l, _ = ax.stem(V_multi, linefmt='y-', markerfmt='yo', basefmt=' ', label="Target",
use_line_collection=True)
m.set_markersize(15)
m.set_markerfacecolor('none')
l.set_linewidth(4)
m, _, _ = ax.stem(V_guess, linefmt='r', markerfmt='rx', basefmt=' ', label="Guess",
use_line_collection=True)
m.set_markersize(15)
ax.set_xlabel("State")
ax.set_ylabel("Value")
ax.set_title("Guess V(s)\n" + reward_guesser_title_hint(r1, r2))
ax.legend()
except NotImplementedError:
print("Please finish Exercise 1 first!")
###Output
_____no_output_____
###Markdown
Section 2.1 Examining the TD ErrorRun the cell below to plot the TD errors from our multi-reward environment. A new feature appears in this plot. What is it? Why does it happen?
###Code
plot_tde_trace(TDE_multi)
#to_remove explanation
"""
The TD trace now takes on negative values because the reward delivered is
sometimes larger than the expected reward and sometimes smaller.
""";
###Output
_____no_output_____
###Markdown
--- Section 3: TD-learning with probabilistic rewardsIn this environment, we'll return to delivering a single reward of ten units. However, it will be delivered intermittently: on 20 percent of trials, the CS will be shown but the agent will not receive the usual reward; the remaining 80% will proceed as usual. Run the cell below to simulate. How does this compare with the previous experiment? Earlier in the notebook, we saw that changing $\alpha$ had little effect on learning in a deterministic environment. What happens if you set it to a large value, like 1, in this noisier scenario? Does it seem like it will _ever_ converge?
###Code
np.random.set_state(rng_state) # Resynchronize everyone's notebooks
n_trials = 20000
try:
env = ProbabilisticCC(n_steps=40, reward_magnitude=10, reward_time=10,
p_reward=0.8)
V_stochastic, TDE_stochastic = td_learner(env, n_trials*2, alpha=1)
learning_summary_plot(V_stochastic, TDE_stochastic)
except NotImplementedError:
print("Please finish Exercise 1 first")
#to_remove explanation
"""
The multi-reward and probabilistic reward environments are the same. You
could simulate a probabilistic reward of 10 units, delivered 50% of the time,
by having a mixture of 10 and 0 unit rewards, or vice versa. The takehome message
from these last three exercises is that the *average* or expected reward is
what matters during TD learning.
Large values of alpha prevent the TD Learner from converging. As a result, the
value function seems implausible: one state may have an extremely low value while
the neighboring ones remain high. This pattern persists even if training continues
for hundreds of thousands of trials.
""";
###Output
_____no_output_____
###Markdown
--- SummaryIn this notebook, we have developed a simple TD Learner and examined how its state representations and reward prediction errors evolve during training. By manipulating its environment and parameters ($\alpha$, $\gamma$), you developed an intuition for how it behaves. This simple model closely resembles the behavior of subjects undergoing classical conditioning tasks and the dopamine neurons that may underlie that behavior. You may have implemented TD-reset or used the model to recreate a common experimental error. The update rule used here has been extensively studied for [more than 70 years](https://www.pnas.org/content/108/Supplement_3/15647) as a possible explanation for artificial and biological learning. However, you may have noticed that something is missing from this notebook. We carefully calculated the value of each state, but did not use it to actually do anything. Using values to plan _**Actions**_ is coming up next! Bonus Exercise 2: Removing the CSIn Exercise 1, you (should have) included a term that depends on the conditioned stimulus. Remove it and see what happens. Do you understand why? This phenomenon often fools people attempting to train animals--beware!
###Code
#to_remove explanation
"""
You should only be updating the V[state] once the conditioned stimulus appears.
If you remove this term the Value Function becomes periodic, dropping towards zero
right after the reward and gradually rising towards the end of the trial. This
behavior is actually correct, because the model is learning the time until the
*next* reward, and State 37 is closer to a reward than State 21 or 22.
In an actual experiment, the animal often just wants rewards; it doesn't care about
/your/ experiment or trial structure!
""";
###Output
_____no_output_____ |
_notebooks/2021-07-20-webscrapping-youtube-blog.ipynb | ###Markdown
"Web Scrapping Popular Youtube Tech Channels with Selenium"> "Data Mining, Data Wrangling, Data Exploratory Analysis"- toc: false- badges: true- comments: true- categories: [Selenium, Web Scrapping, Pandas, Youtube, Python]- image: "images/thumbnails/header_youtube_web.png" Notebook Created by: __David Rusho__ ([Github Blog](https://drusho.github.io/blog) | [Tableau](https://public.tableau.com/app/profile/drusho/) | [Linkedin](https://linkedin.com/in/davidrusho)) About the DataWeb scraping was performed on the _Top 10 Tech Channels_ on Youtube using _[Selenium](https://selenium-python.readthedocs.io/)_ (an automated browser (driver) controlled using python, which is often used in web scraping and web testing). These channels were selected using a __[Top 10 Tech Youtubers](https://blog.bit.ai/top-tech-youtubers/)__ list from blog.bit.ai. Data from 2,000 videos was scrapped, which equals about 200 of most popular videos per channel. Introduction Collecting and Cleaning Data Web Scrapping Youtube Channels
###Code
#collapse
import pandas as pd
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
# Chrome driver location (for M1 macbook air)
DRIVER_PATH = "/opt/homebrew/bin/chromedriver"
# activate driver
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
# Scroll to bottom of page
def scroll_page():
for x in range(7):
html = driver.find_element_by_tag_name("html")
html.send_keys(Keys.END)
time.sleep(2)
def scrap_videos():
scroll_page()
chan_xpath = '//*[@id="channel-name"]'
subs_xpath = '//*[@id="subscriber-count"]'
videos_class = "style-scope ytd-grid-video-renderer"
views_xpath = './/*[@id="metadata-line"]/span[1]'
post_date_xpath = './/*[@id="metadata-line"]/span[2]'
title_xpath = './/*[@id="video-title"]'
# Scrap Channel Name
try:
channel_name = driver.find_element_by_xpath(chan_xpath).text
except (Exception,):
pass
# Scrap Number of Subscribers
try:
subscribers = driver.find_element_by_xpath(subs_xpath).text
except (Exception,):
pass
# Reassign variable to recalculate all videos
videos = driver.find_elements_by_class_name(videos_class)
# Loop through all videos
for video in videos:
# grab title if available
try:
title = video.find_element_by_xpath(title_xpath).text
except (Exception,):
pass
# grab url if available
try:
url = video.find_element_by_xpath(title_xpath).get_attribute("href")
except (Exception,):
pass
# grab views if available
try:
views = video.find_element_by_xpath(views_xpath).text
except (Exception,):
pass
# grab post date if available
try:
post_date = video.find_element_by_xpath(post_date_xpath).text
except (Exception,):
pass
video_items = {
"channel_name": channel_name,
"subscribers": subscribers,
"title": title,
"views": views,
"post_date": post_date,
"url": url,
}
vid_list.append(video_items)
return vid_list
# scrap Channel About section
def scrap_about():
chan_name_xp = '//*[@id="channel-name"]'
chan_join = './/*[@id="right-column"]/yt-formatted-string[2]/span[2]'
chan_views = './/*[@id="right-column"]/yt-formatted-string[3]'
chan_desc = './/*[@id="description"]'
# Scrap Channel Name
try:
channel_name = driver.find_element_by_xpath(chan_name_xp).text
except (Exception,):
pass
# Scrap Channel Join Date (about)
try:
channel_join = driver.find_element_by_xpath(chan_join).text
except (Exception,):
pass
# Scrap Channel Views (about)
try:
channel_views = driver.find_element_by_xpath(chan_views).text
except (Exception,):
pass
# Scrap Channel Description (about)
try:
channel_description = driver.find_element_by_xpath(chan_desc).text
except (Exception,):
pass
about_items = {
"channel_name": channel_name,
"channel_join_date": channel_join,
"channel_views": channel_views,
"channel_description": channel_description,
}
vid_list.append(about_items)
return vid_list
# top youtubers based off 'https://blog.bit.ai'
top_youtubers = [
"ijustine",
"AndroidAuthority",
"Mrwhosetheboss",
"TechnoBuffalo",
"TLD",
"austinevans",
"unboxtherapy",
"LinusTechTips",
"UrAvgConsumer",
"mkbhd",
]
# empty list to hold video details
vid_list = []
# url of most videos sorted by most popular
for youtuber in top_youtubers:
print(f"processing {youtuber}")
url = f"https://www.youtube.com/{youtuber}/videos?view=0&sort=p&flow=grid"
driver.get(url)
scroll_page()
vid_list = scrap_videos()
about_url = f"https://www.youtube.com/{youtuber}/about"
about = driver.get(about_url)
driver.implicitly_wait(10)
about_items = scrap_about()
# Close Chrome browser
driver.quit()
# create pandas df for video info
df_channel = pd.DataFrame(vid_list)
# export df to csv
df_channel.to_csv("yt_channel_scrap.csv")
###Output
_____no_output_____
###Markdown
Web Scraping Youtube Videos
###Code
#collapse
import pandas as pd
import time
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from datetime import datetime
from requests import options
from selenium import webdriver
# driver options (size and headless)
options = Options()
options.add_argument("--headless")
options.add_argument("--window-size=1920x1080")
# Chrome driver location (for M1 macbook air)
DRIVER_PATH = "/opt/homebrew/bin/chromedriver"
# activate driver
driver = webdriver.Chrome(executable_path=DRIVER_PATH, options=options)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
# partial video description
def par_description():
vid_desc = "//div[@class='watch-main-col']/meta[@itemprop='description']"
elems = driver.find_elements_by_xpath(vid_desc)
for elem in elems:
return elem.get_attribute("content")
# publish_date
def publish():
pub_date = "//div[@class='watch-main-col']/meta[@itemprop='datePublished']"
elems = driver.find_elements_by_xpath(pub_date)
for elem in elems:
return elem.get_attribute("content")
# upload_date
def upload():
upload_date = "//div[@class='watch-main-col']/meta[@itemprop='uploadDate']"
elems = driver.find_elements_by_xpath(upload_date)
for elem in elems:
return elem.get_attribute("content")
# genre
def genre():
genre = "//div[@class='watch-main-col']/meta[@itemprop='genre']"
elems = driver.find_elements_by_xpath(genre)
for elem in elems:
return elem.get_attribute("content")
# video_width
def width():
v_width = "//div[@class='watch-main-col']/meta[@itemprop='width']"
elems = driver.find_elements_by_xpath(v_width)
for elem in elems:
return elem.get_attribute("content")
# video_height
def height():
v_height = "//div[@class='watch-main-col']/meta[@itemprop='height']"
elems = driver.find_elements_by_xpath(v_height)
for elem in elems:
return elem.get_attribute("content")
# Interaction Count
def interactions():
interactions = "//div[@class='watch-main-col']/meta[@itemprop='interactionCount']"
elems = driver.find_elements_by_xpath(interactions)
for elem in elems:
return elem.get_attribute("content")
# Video_title
def video_title():
video_title = "//div[@class='watch-main-col']/meta[@itemprop='name']"
elems = driver.find_elements_by_xpath(video_title)
for elem in elems:
return elem.get_attribute("content")
# Channel_name
def channel_name():
channel_name = (
"//div[@class='watch-main-col']/span[@itemprop='author']/link[@itemprop='name']"
)
elems = driver.find_elements_by_xpath(channel_name)
for elem in elems:
return elem.get_attribute("content")
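# Video duration -- assumed here to be exposed through an itemprop-style meta tag
# like the other fields above; if the watch page does not provide it, None is returned.
def duration():
    dur = "//div[@class='watch-main-col']/meta[@itemprop='duration']"
    elems = driver.find_elements_by_xpath(dur)
    for elem in elems:
        return elem.get_attribute("content")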
# Number Likes
def likes():
likes_xpath = "(//div[@id='top-level-buttons-computed']//*[contains(@aria-label,' likes')])[last()]"
return driver.find_element_by_xpath(likes_xpath).text
# Total Comments
def comments():
# Move Page to display comments
# set scroll pause time
SCROLL_PAUSE_TIME = 0.5
# scroll to page bottom
driver.execute_script("window.scrollTo(0, 1080)")
# Wait for page load
time.sleep(SCROLL_PAUSE_TIME)
# scroll to page bottom
driver.execute_script("window.scrollTo(300, 1080)")
# Wait to load page
time.sleep(SCROLL_PAUSE_TIME)
com = WebDriverWait(driver, 10).until(
EC.presence_of_element_located(
(By.XPATH, '//*[@id="count"]/yt-formatted-string')
)
)
return com.text
# import csv of youtube channels data
df_channels = pd.read_csv(
"yt_channel_scrap.csv",
)
# new df of channel names and urls
df_videos = df_channels[["channel_name", "url"]].dropna()
# isolate video urls to a list
url_list = df_videos.url.to_list()
vid_list = []
url_fails_ls = []
count = 0
# record the start time so the total scraping duration can be reported at the end
start_time = datetime.now()
# launch driver(s)
for url in url_list:
driver.get(url)
count += 1
time.sleep(3)
subscribe_button = '//*[@id="subscribe-button"]'
WebDriverWait(driver, 30).until(
EC.presence_of_element_located((By.XPATH, subscribe_button))
)
try:
comments_num = comments()
likes_num = likes()
chan_name = channel_name()
v_duration = duration()
p_description = par_description()
publish_date = publish()
upload_date = upload()
v_genre = genre()
v_width = width()
v_height = height()
title = video_title()
interaction_count = interactions()
    except Exception:
        print(f"EXCEPTION RAISED for {url}")
        url_fails_ls.append(url)
        # skip this url so stale values from the previous iteration are not appended
        continue
video_items = {
"url": url, # primary key
"Channel Name": chan_name,
"Title": title,
"Duration": v_duration,
"Partial Description": p_description,
"Publish Date": publish_date,
"Upload_date": upload_date,
"Genre": v_genre,
"Width": v_width,
"Height": v_height,
"Likes": likes_num,
"Comments": comments_num,
"Interaction Count": interaction_count,
}
vid_list.append(video_items)
# print(f"url {count} of {len(url_list)} complete")
# print every 10th url
if count % 10 == 0:
print(f"URL {count} of {len(url_list)} processed.")
driver.quit()
end_time = datetime.now()
# # create dfs for video and failed urls
df_videos = pd.DataFrame(vid_list)
# store urls that failed to load in driver
url_fails_dict = {"url": url_fails_ls}
df_url_fails = pd.DataFrame(url_fails_dict)
print("Driver Quit")
print("Code Duration: {}".format(end_time - start_time))
print(f"Videos Processed: {len(vid_list)}")
print(f"Failures: {len(url_fails_ls)}")
# export df to csv
df_url_fails.to_csv(
"url_fails.csv"
)
df_videos.to_csv(
"yt_videos_scrap.csv"
)
###Output
_____no_output_____
###Markdown
Importing and Cleaning the Data__Note:__ Code in the cell below comes from [this notebook](https://colab.research.google.com/drive/1urQPIhLlr8U8LRB2pHQHkSuLyae60nly?usp=sharing) I created to originally clean and merge the data.
###Code
# collapse
import pandas as pd
# load channel csv
yt = pd.read_csv("yt_channel_scrap.csv", parse_dates=["channel_join_date"])
# create df of Channel details
channel_details = yt[yt.channel_join_date.notna()]
channel_details = channel_details.drop(
columns=["Unnamed: 0", "subscribers", "title", "views", "post_date"]
).reset_index(drop=True)
# create df Video details
video_details = yt[yt.channel_join_date.isna()]
video_details = video_details.drop(
columns=[
"Unnamed: 0",
"channel_join_date",
"channel_views",
"channel_description",
"post_date",
]
).reset_index(drop=True)
# merge dfs
merged = channel_details.merge(video_details, on="channel_name")
# drop 2nd url column and rename remaining url col
merged.drop(columns=("url_x"), inplace=True)
merged.rename(columns={"url_y": "url"}, inplace=True)
# dtypes to float for views and subscribers
merged.subscribers = (
merged.subscribers.str.replace("M subscribers", "").astype("float") * 1000000
)
# modify views col dtype to float
def fix_views(col):
if "M" in col:
return float(col.replace("M views", "")) * 1000000
elif "K" in col:
return float(col.replace("K views", "")) * 1000
elif "1 year ago" in col:
return 0
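# note: values without an "M"/"K" suffix (other than the "1 year ago" artifact)
# fall through fix_views and return None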
merged["views"] = merged["views"].apply(fix_views)
# Correct channel view column to display num only
merged["channel_views"] = (
merged["channel_views"].str.replace(",", "").str.replace(" views", "").astype("int")
)
# import Videos csv data
df_videos = pd.read_csv(
"yt_videos_scrap_big_data.csv", parse_dates=["Publish Date", "Upload_date"]
)
df_videos.drop(
columns=["Unnamed: 0", "Duration", "Channel Name", "Title"], inplace=True
)
# comments dytpe to int
df_videos["Comments"] = (
df_videos["Comments"].str.replace("Comments", "").str.replace(",", "").astype("int")
)
# modify likes col dtype to float
def fix_likes(col):
if "M" in col:
return float(col.replace("M", "")) * 1000000
elif "K" in col:
return float(col.replace("K", "")) * 1000
else:
return float(col)
# Fix Likes Column
df_videos["Likes"] = df_videos["Likes"].apply(fix_likes)
# Fix Width and Height, remove '.' and '0' from end of str
df_videos["Width"] = df_videos["Width"].astype("str").str.split(".", expand=True)[0]
df_videos["Height"] = df_videos["Height"].astype("str").str.split(".", expand=True)[0]
vc_merged = merged.merge(df_videos, on="url")
# rename columns to increase readability in analysis plots and tables
vc_merged.rename(
columns={
"channel_name": "Channel Name",
"channel_join_date": "Channel Join Date",
"channel_views": "Channel Views (M)",
"subscribers": "Subscribers (M)",
"Interaction Count": "Interactations (M)",
"views": "Video Views (M)",
"Partial Description": "Video Desc",
"Publish Date": "Publish Date",
"Upload_date": "Upload Date",
"Genre": "Video Genre",
"Width": "Width",
"Height": "Height",
"Comments": "Video Comments",
"title": "Video Title",
"url": "Video URL",
},
inplace=True,
)
###Output
_____no_output_____
###Markdown
Data Cleaning CompleteFully cleaned and merged data from the YouTube channels and all videos.__Note:__ The columns _Channel Views (M)_, _Subscribers (M)_, _Video Views (M)_, and _Interactions (M)_ are in millions. * Example: The iJustine channel has 6.89 **M** subscribers.
###Code
# hide
# shorten column numbers length by millions
import pandas as pd
df = pd.read_csv(
"yt_web_scrap_cleaned.csv",
parse_dates=["Publish Date", "Upload Date", "Channel Join Date"])
df.head(2)
###Output
_____no_output_____
###Markdown
hide Column Descriptions|Column Name | Description ||:--|:--||Channel Name|Name of Youtube Channel ||Channel Join Date|Date Channel was created||Channel Views (M)|Total views the channel has received (in millions)||Channel Description|Description of Youtube Channel||Subscribers (M)|Number of channel subscribers (in millions)||Video Title|Video title||Video Views (M)|Total views for video (in millions)||Video URL|Video url||Video Desc|Description of video||Publish Date|Date video was published||Upload Date|Date video was uploaded||Video Genre|Genre of video||Width|Width of video||Height|Height of video||Likes|Total likes for video||Video Comments|Total comments for video||Interactions (M)|Number of interactions video has received (in millions)| Data Analysis Youtube Channels Ordered by Join Date__Note 1 :__ Join Date is the date that the Youtube Channel was created.__Note 2:__ Join Date does not seem to have any relationship to number of subscribers or overall channel views
###Code
# collapse
# List of Video Channels
yt_chan_jn = (
df.groupby(["Channel Join Date", "Channel Name", "Channel Views (M)"])[
"Subscribers (M)"
]
.max()
.to_frame()
.reset_index()
)
# rename columns to increase readability
yt_chan_jn.rename(
columns={
"Channel Name": "Channel",
"Channel Join Date": "Join Date",
"Subscribers (M)": "Subscribers",
"Channel Views (M)": "Channel Views",
},
inplace=True,
)
yt_chan_jn
# # style dataframe to highlight highest values
yt_chan_jn = yt_chan_jn.style.format(
formatter={"Subscribers": "{:,} M", "Channel Views": "{:,} M"}
).background_gradient(
subset=["Channel Views", "Subscribers"], cmap="Wistia"
).set_caption(
"Youtube Channels Ordered by Join Date"
).set_table_styles(
[dict(selector="caption", props=[("text-align", "center"), ("font-size", "125%")])]
).hide_index()
yt_chan_jn
###Output
_____no_output_____
###Markdown
Top 10 Most Viewed Videos__Note:__ ___70%___ of the videos in this list are about phones.
###Code
# collapse
# Top 10 Videos by Views
top_vwd_chan = (
df.groupby(["Video Title", "Channel Name", "Publish Date"])["Video Views (M)"]
.max()
.sort_values(ascending=False)
.head(10)
.reset_index()
)
# rename columns to increase readability
top_vwd_chan.rename(
columns={"Channel Name": "Channel", "Video Views (M)": "Video Views"}, inplace=True
)
top_vwd_chan.style.format(
formatter={"Video Views": "{:,} M", "Publish Date": "{:%Y-%m-%d}"}
).background_gradient(
subset=["Video Views", "Publish Date"], cmap="Wistia"
).set_caption(
"Top 10 Youtube Videos by Views"
).set_table_styles(
[dict(selector="caption", props=[("text-align", "center"), ("font-size", "125%")])]
).hide_index()
###Output
_____no_output_____
###Markdown
Youtube Channels Grouped by Total Video ViewsSum of all videos views for each channel.__Note:__ There is an obvious relationship between ___Subscribers___ and ___Video View___ counts.
###Code
# collapse
# Total Views by Channel
chan_views = (
df.groupby(["Channel Name", "Subscribers (M)"])["Video Views (M)"]
.sum()
.sort_values(ascending=False)
.reset_index()
)
# rename columns to increase readability
chan_views.rename(
columns={
"Channel Name": "Channel",
"Video Views (M)": "Video Views",
"Subscribers (M)": "Subscribers",
},
inplace=True,
)
chan_views.style.format(
formatter={
"Video Views": "{:,}",
"Video Views": "{0:,.0f} M",
"Subscribers": "{:,} M",
}
).background_gradient(subset=["Video Views", "Subscribers"], cmap="Wistia").set_caption(
"Channels Grouped by Video Views"
).set_table_styles(
[dict(selector="caption", props=[("text-align", "center"), ("font-size", "125%")])]
).hide_index()
###Output
_____no_output_____
###Markdown
Top 10 Liked Videos__Note 1:__ The following top-10 liked videos don't review a tech product.* ["Reflecting on the Color of My Skin"](https://www.youtube.com/watch?v=o-_WXXVye3Y) created by __Marques Brownlee__ * ["I've been thinking of retiring"](https://www.youtube.com/watch?v=hAsZCTL__lo) created by **Linus Tech Tips** __Note 2:__ Mrwhosetheboss capitalizes _"THIS"_ in a lot of their video titles.
###Code
# collapse
# Top 10 Videos by Likes
top_lkd_chan = (
df.groupby(["Video Title", "Channel Name", "Publish Date"])["Likes"]
.max()
.sort_values(ascending=False)
.head(10)
.reset_index()
)
# rename columns to increase readability
top_lkd_chan.rename(columns={"Channel Name": "Channel"}, inplace=True)
top_lkd_chan.style.format(
formatter={"Likes": "{:,}", "Publish Date": "{:%Y-%m-%d}"}
).background_gradient(subset=["Likes", "Publish Date"], cmap="Wistia").set_caption(
"Top 10 Liked Videos"
).set_table_styles(
[dict(selector="caption", props=[("text-align", "center"), ("font-size", "125%")])]
).hide_index()
###Output
_____no_output_____
###Markdown
Videos Likes per Video (Scatter Plot)
###Code
# collapse
# Top Video Likes Over Time (Scatter Plot)
import plotly.express as px
import plotly.graph_objects as go
# set global plot colors
# plotly marker colors
mcolors = "#1f77b4" # light blue
# wordcloud letters
cmaps = "Wistia"
cmaps_r = "Wistia_r"
# plotly backround
wtbckgnd = {"plot_bgcolor": "rgba(255,255,255, 0.9)"} # white background
blkbackground = {"plot_bgcolor": "rgba(0, 0, 0, 0.5)"} # black background
fig = px.scatter(
df,
y="Likes",
x="Publish Date",
color="Likes",
hover_name="Video Title",
hover_data=["Channel Name"],
color_continuous_scale="solar_r",
)
fig.update_layout(
wtbckgnd, # set background to white
title={
"text": "Top Video Likes Over Time",
"y": 0.88,
"x": 0.5,
"xanchor": "center",
"yanchor": "top",
},
xaxis_title="Video Publish Date",
yaxis_title="No. of Likes",
)
fig.show()
###Output
_____no_output_____
###Markdown
Word Frequency for 2,000 Video Titles (Bar Plot)
###Code
#hide
!pip install texthero
# collapse
import texthero as hero
from texthero import preprocessing
from texthero import stopwords
# create a custom cleaning pipeline
custom_pipeline = [
preprocessing.fillna
# , preprocessing.lowercase
,
preprocessing.remove_digits,
preprocessing.remove_punctuation,
preprocessing.remove_diacritics
# , preprocessing.remove_stopwords
,
preprocessing.remove_whitespace,
]
# , preprocessing.stem]
default_stopwords = stopwords.DEFAULT
# add a list of stopwords to the stopwords
custom_stopwords = default_stopwords.union(set(["The", "vs"]))
# pass the custom_pipeline to the pipeline argument
df["clean_title"] = hero.clean(df["Video Title"], pipeline=custom_pipeline)
# Call remove_stopwords and pass the custom_stopwords list
df["clean_title"] = hero.remove_stopwords(df["clean_title"], custom_stopwords)
tw = hero.visualization.top_words(df["clean_title"]).head(10).to_frame()
tw.reset_index(inplace=True)
tw.rename(columns={"index": "word", "clean_title": "freq"}, inplace=True)
fig = go.Figure([go.Bar(x=tw.word, y=tw.freq, textposition="auto")])
fig.update_layout(
wtbckgnd, # set background to white
title={
"text": "Word Frequency for 2,000 Video Titles",
"y": 0.88,
"x": 0.5,
"xanchor": "center",
"yanchor": "top",
},
yaxis=dict(title="Word Count"),
)
fig.update_traces(marker_color="orange")
###Output
_____no_output_____
###Markdown
Word Frequency in Video Titles (Word Cloud)
###Code
# collapse
import texthero as hero
# Word cloud of top words from clean_title
hero.wordcloud(
df.clean_title,
max_words=200,
# contour_color='red',
background_color="white",
colormap="Oranges",
height=500,
width=800,
)
###Output
_____no_output_____
###Markdown
Using KMeans and PCA to Cluster Video Titles __Note:__ Video titles fall into 5 topic groups. * iPhone (kmeans 0) * Samsung (kmeans 1) * Reviews (kmeans 2) * Unboxing (kmeans 3) * How-to (kmeans 4)
###Code
# collapse
# Add pca value to dataframe to use as visualization coordinates
df["pca"] = df["clean_title"].pipe(hero.tfidf).pipe(hero.pca)
# Add k-means cluster to dataframe
df["kmeans"] = df["clean_title"].pipe(hero.tfidf).pipe(hero.kmeans)
hero.scatterplot(df, "pca", color="kmeans", hover_data=["Video Title"])
###Output
_____no_output_____
###Markdown
Correlations__Note:__ The following pairs appear to be highly correlated. * Channel Views and Subscribers * Interactions and Video Views
###Code
#collapse
df.drop(columns=["Unnamed: 0","Width","Height"]).corr().style.background_gradient(
subset=[
"Channel Views (M)",
"Subscribers (M)",
"Video Views (M)",
"Likes",
"Video Comments",
"Interactations (M)",
],
cmap="Wistia",
)
###Output
_____no_output_____ |
Pandas_Inuron/pandas_groupby.ipynb | ###Markdown
Group by refers to a process involving the following steps: (1) Splitting the data into groups. (2) Applying a function to each group independently. (3) Combining the results into a data structure. In the apply step, we might wish to do one of the following: Aggregation: compute a summary statistic for each group, for example sum, mean, or count. Transformation: perform some group-specific computations and return a like-indexed object, for example standardize data within a group or replace missing values within groups. Filtration: discard some groups, according to a group-wise computation that evaluates True or False, for example discard data that belongs to groups with only a few members or filter out data based on the group sum or mean. Aggregation With column name
###Code
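# NOTE (assumption): this notebook works on the Titanic passenger data loaded into `df`;
# "titanic.csv" is a placeholder filename -- point it at your local copy of the dataset.
import numpy as np
import pandas as pd

df = pd.read_csv('titanic.csv')
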
#To perform aggregation on a specific column
df.groupby('Sex').Age.max()
#simply by using max() method
type(df.groupby('Sex')['Age'].max())
# we can also achieve this by using the agg([]) method
## it will return a DataFrame
df.groupby('Sex').Age.agg(['max'])
type(df.groupby('Sex').Age.agg(['max']))
## calculate different statistics together
df.groupby('Sex').Age.agg(['max', 'min', 'count', 'median', 'mean'])
###customize column name
df.groupby('Sex').Age.agg(
sex_max=('max'),
sex_min=('min'),
sex_count=('count'),
)
# use a custom aggregation function:
def categorize(x):
m = x.mean()
return True if m > 29 else False
df.groupby('Sex').Age.agg(['max', 'mean', categorize])
## We can also use a lambda expression as well
df.groupby('Sex').Age.agg(
['max', 'mean', lambda x: True if x.mean() > 50 else False]
)
###Output
_____no_output_____
###Markdown
Without column name
###Code
# we don’t actually have to specify a column like Age. Without a column,
#it will perform the aggregation across all of the numeric columns
df.groupby('Sex').mean()
#Similarly, we can call agg() without a column.
df.groupby('Sex').agg(['mean', 'median'])
###Output
_____no_output_____
###Markdown
3. Transforming data
###Code
#Transformation is a process in which we perform some group-specific computations and return a like-indexed (same length) object.
# When transforming data, transform() and apply() are the most commonly used functions.
## group-by standardization
standardization = lambda x: (x - x.mean()) / x.std()
df.groupby('Sex').Age.transform(standardization)
# we can use apply function as well to do the same
df.groupby('Sex').Age.apply(standardization)
###Output
_____no_output_____
###Markdown
4. Filtration Filtration is a process in which we discard some groups, according to a group-wise computation that evaluates True or False.
###Code
df.head()
df.groupby('Cabin').size()
# Now let's filter the data to return all passengers who lived in a cabin shared by ≥ 4 people.
df.groupby('Cabin').filter(lambda x : len(x) >=4 )
###Output
_____no_output_____
###Markdown
6. Grouping by multiple categories Instead of a label, we can also pass a list of labels to work with multiple grouping.
###Code
# Creating a subset
df_subset = df.loc[:, ['Sex', 'Pclass', 'Age', 'Fare']]
# Group by multiple categories
df_subset.groupby(['Sex', 'Pclass']).mean()
###Output
_____no_output_____
###Markdown
7. Resetting index with as_index Grouping by multiple categories will result in a MultiIndex DataFrame. However, it is not practical to have Sex and Pclass columns as the index (see the output above) when we need to perform some data analysis.
###Code
subset=df_subset.groupby(['Sex', 'Pclass']).mean()
subset
# Resetting index
subset.reset_index()
# Another way
df_subset.groupby(['Sex', 'Pclass'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
8. Handling missing values The groupby() function ignores missing values by default. Let's create some NaN values in the Sex column.
###Code
# Creating a subset
df_subset = df.loc[:, ['Sex', 'Pclass', 'Age', 'Fare']]
df_subset.head()
df_subset.shape
df_subset.iloc[80:100, 0]=np.nan
df_subset.iloc[80:100, 0]
df_subset.isna().sum()
# The groupby function ignores the missing values by default.
df_subset.groupby(['Sex', 'Pclass']).mean()
#In some cases, we also need to get an overview of the missing values. We can set the dropna argument
#to False to include missing values.
df_subset.groupby(['Sex', 'Pclass'], dropna=False).mean()
###Output
_____no_output_____ |
Reliability_Re_Annotation_Process.ipynb | ###Markdown
Reliability Re-Annotation Process
###Code
#load GOLD Standard Corpus
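# NOTE (assumption): `trainDF` is expected to already hold the gold-standard corpus as a
# pandas DataFrame with 'text' and 'label' (TRUE/FAKE) columns, e.g. loaded with
# pd.read_csv() from a file of your choosing before this cell runs.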
trainDF['text'] = trainDF['text'].replace('\n','', regex=True)
trainDF['text'] = trainDF['text'].replace('\n',' ', regex=True)
trainDF['text'] = trainDF['text'].replace(r'\\n',' ', regex=True)
trainDF
###Output
_____no_output_____
###Markdown
Train Gold Corpus
###Code
from sklearn import model_selection
train_x,test_x,train_y,test_y = model_selection.train_test_split(trainDF,trainDF,test_size=0.15,random_state=11)
train_x.reset_index(drop=True,inplace=True)
test_x.reset_index(drop=True,inplace=True)
train_x.shape
###Output
_____no_output_____
###Markdown
Build Flair GOLD *Corpus*
###Code
from flair.data import Corpus
from flair.datasets import SentenceDataset
from flair.data import Sentence
train_labeled=[]
for i in range(len(train_x['text'])):
sentence = Sentence(train_x['text'][i]).add_label('reliability', train_x['label'][i])
train_labeled.append(sentence)
test_labeled=[]
for i in range(len(test_x['text'])):
sentence = Sentence(test_x['text'][i]).add_label('reliability', test_x['label'][i])
test_labeled.append(sentence)
# wrap the labeled train and test sentences in Flair SentenceDatasets
train = SentenceDataset(train_labeled)
test = SentenceDataset(test_labeled)
# make a corpus with train and test split
corpus = Corpus(train=train, test=test)
print(len(corpus.test))
print(len(corpus.train))
print(len(test_labeled))
print(len(train_labeled))
###Output
72
363
72
403
###Markdown
Training
###Code
from flair.data import Corpus
#from flair.datasets import TREC_6
from flair.embeddings import WordEmbeddings, FlairEmbeddings, DocumentRNNEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer
# this code fine-tunes the base TARS model on the gold training corpus
from flair.trainers import ModelTrainer
from flair.models.text_classification_model import TARSClassifier
# 1. load base TARS
tars = TARSClassifier.load('tars-base')
# 2. make the model aware of the desired set of labels from the new corpus
tars.add_and_switch_to_new_task("TRUE_FAKE", label_dictionary=corpus.make_label_dictionary())
# 3. initialize the text classifier trainer with your corpus
trainer = ModelTrainer(tars, corpus)
# 4. train model
trainer.train(base_path='result/gold', # path to store the model artifacts
learning_rate=0.02, # use very small learning rate
mini_batch_size=1, # small mini-batch size since corpus is tiny
max_epochs=10, # terminate after 10 epochs
train_with_dev=True,
)
###Output
2021-05-11 20:24:45,485 https://nlp.informatik.hu-berlin.de/resources/models/tars-base/tars-base-v8.pt not found in cache, downloading to /tmp/tmpub16jujd
###Markdown
Annotate the sample corpus
###Code
trainDF
sample_sent=[]
for i in range(len(trainDF['text'])):
sentence = Sentence(trainDF['text'][i])
sample_sent.append(sentence)
sample_sent
sample_annotated=[]
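# NOTE (assumption): `classifier` used below is the fine-tuned TARS model from the training
# step above, e.g. reloaded from the 'result/gold' base path with TARSClassifier.load(...)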
for i in range(len(trainDF['text'])):
classifier.predict(sample_sent[i])
sample_annotated.append(sample_sent[i].labels)
sample_annotated
trainDF['sample_annotated']=sample_annotated
sample_sent[0].labels[0].to_dict()['value']
sample_sent[0].labels[0].to_dict()['confidence']
scory=[]
labelnew=[]
for i in range(len(trainDF['text'])):
scory.append(sample_sent[i].labels[0].to_dict()['confidence'])
labelnew.append(sample_sent[i].labels[0].to_dict()['value'])
trainDF['labelnew']=labelnew
trainDF['scory']=scory
trainDF
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
cm = confusion_matrix(trainDF['label'], trainDF['labelnew'])
print(cm)
cr = classification_report(trainDF['label'], trainDF['labelnew'])
print(cr)
###Output
[[1709 606]
[ 53 2442]]
precision recall f1-score support
FAKE 0.97 0.74 0.84 2315
TRUE 0.80 0.98 0.88 2495
accuracy 0.86 4810
macro avg 0.89 0.86 0.86 4810
weighted avg 0.88 0.86 0.86 4810
|
code/chap06-JLS.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 6: AnalysisCopyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
Code from the previous chapter `make_system`, `plot_results`, and `calc_total_infected` are unchanged.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= np.sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', color='blue', label='Susceptible')
plot(I, '-', color='red', label='Infected')
plot(R, ':', color='green', label='Resistant')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
def calc_total_infected(system):
"""Fraction of population infected during the simulation.
system: System object with results.
returns: fraction of population
"""
frame = system.results
return frame.S[system.t0] - frame.S[system.t_end]
###Output
_____no_output_____
###Markdown
Here's an updated version of `run_simulation` that uses `unpack`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a TimeFrame to the System: results
system: System object
update_func: function that updates state
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.loc[t0] = init
for i in linrange(t0, t_end):
frame.loc[i+1] = update_func(frame.loc[i], system)
system.results = frame
###Output
_____no_output_____
###Markdown
**Exercise:** Write a version of `update1` that uses `unpack`.
###Code
# Original
'''
def update1(state, system):
"""Update the SIR model.
state: State (s, i, r)
system: System object
returns: State (sir)
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
'''
def update1(state, system):
"""Update the SIR model.
state: State (s, i, r)
system: System object
returns: State (sir)
"""
unpack(system)
s, i, r = state
infected = beta * i * s
recovered = gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
Test the updated code with this example.
###Code
system = make_system(0.333, 0.25)
run_simulation(system, update1)
system.results.head()
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
Sweeping beta Make a range of values for `beta`, with constant `gamma`.
###Code
beta_array = linspace(0.1, 0.9, 11)
gamma = 0.25
###Output
_____no_output_____
###Markdown
Run the simulation once for each value of `beta` and print total infections.
###Code
for beta in beta_array:
system = make_system(beta, gamma)
run_simulation(system, update1)
print(system.beta, calc_total_infected(system))
###Output
0.1 0.00723090166498
0.18 0.0262722567457
0.26 0.160575485321
0.34 0.490862856866
0.42 0.689867847411
0.5 0.804506112463
0.58 0.873610307851
0.66 0.916554007142
0.74 0.943729262152
0.82 0.961060480958
0.9 0.972099315633
###Markdown
Wrap that loop in a function and return a `SweepSeries` object.
###Code
def sweep_beta(beta_array, gamma):
"""SweepSeriess a range of values for beta.
beta_array: array of beta values
gamma: recovery rate
returns: SweepSeries that maps from beta to total infected
"""
sweep = SweepSeries()
for beta in beta_array:
system = make_system(beta, gamma)
run_simulation(system, update1)
sweep[system.beta] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Sweep `beta` and plot the results.
###Code
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected')
savefig('chap06-fig01.pdf')
###Output
Saving figure to file chap06-fig01.pdf
###Markdown
Sweeping gamma Using the same array of values for `beta`
###Code
beta_array = linspace(0.1, 0.9, 11)
beta_array
###Output
_____no_output_____
###Markdown
And now an array of values for `gamma`
###Code
gamma_array = linspace(0.1, 0.7, 4)
gamma_array
###Output
_____no_output_____
###Markdown
For each value of `gamma`, sweep `beta` and plot the results.
###Code
for gamma in gamma_array:
infected_sweep = sweep_beta(beta_array, gamma)
label = 'gamma = ' + str(gamma)
plot(infected_sweep, label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected',
loc='upper left')
savefig('chap06-fig02.pdf')
###Output
Saving figure to file chap06-fig02.pdf
###Markdown
Now wrap that loop in a function and store the results in a `SweepFrame`
###Code
def sweep_parameters(beta_array, gamma_array):
"""SweepSeriess a range of values for beta and gamma.
beta_array: array of infection rates
gamma_array: array of recovery rates
    returns: SweepFrame with one row for each beta
and one column for each gamma
"""
frame = SweepFrame(columns=gamma_array)
for gamma in gamma_array:
frame[gamma] = sweep_beta(beta_array, gamma)
return frame
###Output
_____no_output_____
###Markdown
Here's what the results look like.
###Code
frame = sweep_parameters(beta_array, gamma_array)
frame.head()
###Output
_____no_output_____
###Markdown
And here's how we can plot the results.
###Code
for gamma in gamma_array:
label = 'gamma = ' + str(gamma)
plot(frame[gamma], label=label)
decorate(xlabel='Contacts per day (beta)',
ylabel='Fraction infected',
loc='upper left')
###Output
_____no_output_____
###Markdown
It's often useful to separate the code that generates results from the code that plots the results, so we can run the simulations once, save the results, and then use them for different analysis, visualization, etc. Contact number After running the sweeps, we have a `SweepFrame` with one row for each value of `beta` and one column for each value of `gamma`.
###Code
frame.shape
###Output
_____no_output_____
###Markdown
The following loop shows how we can loop through the columns and rows of the `SweepFrame`. With 11 rows and 4 columns, there are 44 elements. One implementation note: when we select a column from a `SweepFrame` we get a `Series` object, rather than a `SweepSeries` object, but they are almost the same.
###Code
for gamma in frame.columns:
series = frame[gamma]
for beta in series.index:
frac_infected = series[beta]
print(beta, gamma, frac_infected)
###Output
0.1 0.1 0.0846929424381
0.18 0.1 0.70862278537
0.26 0.1 0.900780251778
0.34 0.1 0.956887899544
0.42 0.1 0.977045257074
0.5 0.1 0.984595862826
0.58 0.1 0.987400345318
0.66 0.1 0.988404249064
0.74 0.1 0.988743421406
0.82 0.1 0.988849515052
0.9 0.1 0.988879570517
0.1 0.3 0.00544355912239
0.18 0.3 0.0159140691448
0.26 0.3 0.0553797621068
0.34 0.3 0.267864167733
0.42 0.3 0.524562935844
0.5 0.3 0.686050483916
0.58 0.3 0.788378556339
0.66 0.3 0.85506574641
0.74 0.3 0.89947913569
0.82 0.3 0.929469302619
0.9 0.3 0.949853310327
0.1 0.5 0.00273576554115
0.18 0.5 0.00611834135832
0.26 0.5 0.0116394693217
0.34 0.5 0.0221147665242
0.42 0.5 0.0478162266689
0.5 0.5 0.132438038458
0.58 0.5 0.303264192648
0.66 0.5 0.464110227319
0.74 0.5 0.588476972528
0.82 0.5 0.682749610978
0.9 0.5 0.754595298329
0.1 0.7 0.001826769347
0.18 0.7 0.00378256160842
0.26 0.7 0.00642667221076
0.34 0.7 0.0101905519335
0.42 0.7 0.0159458265615
0.5 0.7 0.0257079250464
0.58 0.7 0.0450077531168
0.66 0.7 0.0906940688294
0.74 0.7 0.189795211656
0.82 0.7 0.318343186735
0.9 0.7 0.436999374456
###Markdown
Now we can wrap that loop in a function and plot the results. For each element of the `SweepFrame`, we have `beta`, `gamma`, and `frac_infected`, and we plot `beta/gamma` on the x-axis and `frac_infected` on the y-axis.
###Code
def plot_sweep_frame(frame):
"""Plots the values from a parameter SweepSeries.
For each (beta, gamma), computes the contact number,
beta/gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
series = frame[gamma]
for beta in series.index:
frac_infected = series[beta]
plot(beta/gamma, frac_infected, 'ro',
label='Simulation')
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
plot_sweep_frame(frame)
decorate(xlabel='Contact number (beta/gamma)',
ylabel='Fraction infected',
legend=False)
savefig('chap06-fig03.pdf')
###Output
Saving figure to file chap06-fig03.pdf
###Markdown
It turns out that the ratio `beta/gamma`, called the "contact number" is sufficient to predict the total number of infections; we don't have to know `beta` and `gamma` separately.We can see that in the previous plot: when we plot the fraction infected versus the contact number, the results fall close to a curve.But if we didn't know about the contact number, we might have explored other possibilities, like the difference between `beta` and `gamma`, rather than their ratio. **Exercise:** Write a version of `plot_sweep_frame`, called `plot_sweep_frame_difference`, that plots the fraction infected versus the difference `beta-gamma`.What do the results look like, and what does that imply?
###Code
def plot_sweep_frame_difference(frame):
"""Plots the values from a parameter SweepSeries.
For each (beta, gamma), computes the excess contact value,
beta-gamma
frame: SweepFrame with one row per beta, one column per gamma
"""
for gamma in frame.columns:
series = frame[gamma]
for beta in series.index:
frac_infected = series[beta]
plot(beta-gamma, frac_infected, 'ro',
label='Simulation')
plot_sweep_frame_difference(frame)
decorate(xlabel='Excess contact value (beta-gamma)',
ylabel='Fraction infected',
legend=False)
###Output
_____no_output_____
###Markdown
Analysis In the book we figured out the relationship between $c$ and $s_{\infty}$ analytically. Now we can compute it for a range of values:
###Code
s_inf_array = linspace(0.0001, 0.9999, 101)
#s_inf_array
c_array = log(s_inf_array) / (s_inf_array - 1)
#c_array
###Output
_____no_output_____
###Markdown
`total_infected` is the change in $s$ from the beginning to the end.
###Code
frac_infected = 1 - s_inf_array
frac_infected_series = Series(frac_infected, index=c_array)
###Output
_____no_output_____
###Markdown
Now we can plot the analytic results and compare them to the simulations.
###Code
plot_sweep_frame(frame)
plot(frac_infected_series, label='Analysis')
decorate(xlabel='Contact number (c)',
ylabel='Fraction infected')
savefig('chap06-fig04.pdf')
###Output
Saving figure to file chap06-fig04.pdf
###Markdown
The agreement is generally good, except for values of `c` less than 1. **Exercise:** Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point. What is your best estimate of `c`?Hint: if you print `frac_infected_series`, you can read off the answer.
###Code
'''
s values represent numbers of uninfected students,
so the s value for 26% of students infected is .74.
using the equation from before, c = log(s)/(s-1) :
'''
c = log(.74) / (.74 - 1)
c
'''
Also, I DID run a survey, and my results say that
60.4% of people got sick and 39.6% of people did not.
SO:
'''
c = log(.396) / (.396 - 1)
c
# Alternative solution
"""We can use `np.interp` to look up `s_inf` and
estimate the corresponding value of `c`, but it only
works if the index of the series is sorted in ascending
order. So we have to use `sort_index` first.
"""
frac_infected_series.sort_index(inplace=True)
np.interp(0.26, frac_infected_series, frac_infected_series.index)
###Output
_____no_output_____ |
Univariate Exploration.ipynb | ###Markdown
Bar ChartsA bar chart is used to depict the distribution of a categorical variable. In a bar chart, each level of the categorical variable is depicted with a bar, whose height indicates the frequency of data points that take on that level
###Code
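# NOTE (assumption): setup for this notebook -- the imports below and the Pokémon
# dataset loaded into `pokemon`; "pokemon.csv" is a placeholder filename for your local copy.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
%matplotlib inline

pokemon = pd.read_csv('pokemon.csv')
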
sb.countplot(data = pokemon, x = 'generation_id')
###Output
_____no_output_____
###Markdown
Changing color palette
###Code
base_color = sb.color_palette()[0]
sb.countplot(data = pokemon, x = 'generation_id', color = base_color)
###Output
_____no_output_____
###Markdown
For nominal-type data, one common operation is to sort the data in terms of frequency
###Code
base_color = sb.color_palette()[0]
cat_order = pokemon['generation_id'].value_counts().index
sb.countplot(data = pokemon, x = 'generation_id', color = base_color, order = cat_order)
###Output
_____no_output_____
###Markdown
For ordinal-type data, we probably want to sort the bars in order of the variables. While we could sort the levels by frequency like above, we usually care about whether the most frequent values are at high levels, low levels, etc.
###Code
# this method requires pandas v0.21 or later
level_order = [1,2,3,4,5,6,7]
ordered_cat = pd.api.types.CategoricalDtype(ordered = True, categories = level_order)
pokemon['generation_id'] = pokemon['generation_id'].astype(ordered_cat)
# # use this method if you have pandas v0.20.3 or earlier
# df['cat_var'] = df['cat_var'].astype('category', ordered = True,
# categories = level_order)
base_color = sb.color_palette()[0]
sb.countplot(data = pokemon, x = 'generation_id', color = base_color)
###Output
_____no_output_____
###Markdown
Additional Variations
###Code
base_color = sb.color_palette()[0]
sb.countplot(data = pokemon, y = 'type_1', color = base_color)
base_color = sb.color_palette()[0]
type_order = pokemon["type_1"].value_counts().index
sb.countplot(data = pokemon, x = 'type_1', color = base_color, order = type_order);
plt.xticks(rotation = 90);
###Output
_____no_output_____
###Markdown
Absolute vs. Relative FrequencyBy default, seaborn's countplot function will summarize and plot the data in terms of absolute frequency, or pure counts. In certain cases, you might want to understand the distribution of data or want to compare levels in terms of proportions of the whole. In this case, you will want to plot the data in terms of relative frequency, where the height indicates the proportion of data taking each level, rather than the absolute count. One method of plotting the data in terms of relative frequency on a bar chart is to just relabel the counts axis in terms of proportions. The underlying data will be the same, it will simply be the scale of the axis ticks that will be changed.
###Code
# get proportion taken by most common group for derivation
# of tick marks
n_points = pokemon.shape[0]
max_count = pokemon['generation_id'].value_counts().max()
max_prop = max_count / n_points
# generate tick mark locations and names
tick_props = np.arange(0, max_prop, 0.05)
tick_names = ['{:0.2f}'.format(v) for v in tick_props]
# create the plot
base_color = sb.color_palette()[0]
sb.countplot(data = pokemon, x = 'generation_id', color = base_color)
plt.yticks(tick_props * n_points, tick_names)
plt.ylabel('proportion')
###Output
_____no_output_____
###Markdown
The xticks and yticks functions aren't only about rotating the tick labels. You can also get and set their locations and labels as well. The first argument takes the tick locations: in this case, the tick proportions multiplied back to be on the scale of counts. The second argument takes the tick names: in this case, the tick proportions formatted as strings to two decimal places.I've also added a ylabel call to make it clear that we're no longer working with straight counts. Additional VariationRather than plotting the data on a relative frequency scale, you might use text annotations to label the frequencies on bars instead. This requires writing a loop over the tick locations and labels and adding one text element for each bar.
###Code
# create the plot
base_color = sb.color_palette()[0]
sb.countplot(data = pokemon, x = 'type_1', color = base_color)
# add annotations
n_points = pokemon.shape[0]
cat_counts = pokemon['type_1'].value_counts()
locs, labels = plt.xticks() # get the current tick locations and labels
plt.xticks(rotation = 90)
# loop through each pair of locations and labels
for loc, label in zip(locs, labels):
# get the text property for the label to get the correct count
count = cat_counts[label.get_text()]
pct_string = '{:0.1f}%'.format(100*count/n_points)
# print the annotation just below the top of the bar
plt.text(loc, count-8, pct_string, ha = 'center', color = 'w')
###Output
_____no_output_____
###Markdown
Use the `.get_text()` method to obtain the category name, so I can get the count of each category level. At the end, I use the text function to print each percentage, with the x-position, y-position, and string as the three main parameters to the function. Counting Missing Data We can use pandas functions to create a table with the number of missing values in each column.
###Code
pokemon.isna().sum()
###Output
_____no_output_____
###Markdown
Seaborn's barplot function is built to depict a summary of one quantitative variable against levels of a second, qualitative variable, but can be used here.
###Code
na_counts = pokemon.isna().sum()
base_color = sb.color_palette()[0]
sb.barplot(na_counts.index.values, na_counts, color = base_color)
plt.xticks(rotation = 90);
###Output
_____no_output_____
###Markdown
The first argument to the function contains the x-values (column names), the second argument the y-values (our counts). Pie ChartsPie charts are a fairly limited plot type in the range of scenario follow certain guidelines: - Make sure that your interest is in **relative frequencies** . Areas should represent parts of a whole, rather than measurements on a second variable (unless that second variable can logically be summed up into some whole). - Limit the number of slices plotted. A pie chart works best with two or three slices, though it's also possible to plot with four or five slices as long as the wedge sizes can be distinguished. If you have a lot of categories, or categories that have small proportional representation, consider grouping them together so that fewer wedges are plotted, or use an 'Other' category to handle them. - Plot the data systematically. One typical method of plotting a pie chart is to start from the top of the circle, then plot each categorical level clockwise from most frequent to least frequent. If you have three categories and are interested in the comparison of two of them, a common plotting method is to place the two categories of interest on either side of the 12 o'clock direction, with the third category filling in the remaining space at the bottom. Pie plot
###Code
sorted_counts = pokemon['generation_id'].value_counts()
plt.pie(sorted_counts, labels = sorted_counts.index, startangle = 90, counterclock = False)
plt.axis('square')
###Output
_____no_output_____
###Markdown
Donut Plot
###Code
sorted_counts = pokemon['generation_id'].value_counts()
plt.pie(sorted_counts, labels = sorted_counts.index, startangle = 90, counterclock = False, wedgeprops = {'width': 0.4})
plt.axis('square')
###Output
_____no_output_____
###Markdown
HistogramsA histogram is used to plot the distribution of a numeric variable. It's the quantitative version of the bar chart. However, rather than plot one bar for each unique numeric value, values are grouped into continuous bins, and one bar for each bin is plotted depicting the number of data points that fall in each bin.
###Code
plt.hist(data = pokemon, x = 'speed')
###Output
_____no_output_____
###Markdown
By default, the hist function divides the data into 10 bins, based on the range of values taken. In almost every case, we will want to change these settings. Use descriptive statistics (e.g. via ```df['num_var'].describe()```) to gauge what minimum and maximum bin limits might be appropriate for the plot. These bin edges can be set using numpy's arange function.
###Code
bin_edges = np.arange(0, pokemon['speed'].max()+1, 1)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges)
###Output
_____no_output_____
###Markdown
When creating histograms, it's useful to play around with different bin widths to see what represents the data best. Too many bins, and you may see too much noise that interferes with identification of the underlying signal. Too few bins, and you may not be able to see the true signal in the first place.
###Code
plt.figure(figsize = [10, 5]) # larger figure size for subplots
# histogram on left, example of too-large bin size
plt.subplot(1, 2, 1) # 1 row, 2 cols, subplot 1
bin_edges = np.arange(0, pokemon['speed'].max()+4, 4)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges);
# histogram on right, example of too-small bin size
plt.subplot(1, 2, 2) # 1 row, 2 cols, subplot 2
bin_edges = np.arange(0, pokemon['speed'].max()+1/4, 1/4)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges);
###Output
_____no_output_____
###Markdown
Alternative ApproachThe seaborn function distplot can also be used to plot a histogram, and is integrated with other univariate plotting functions.
###Code
sb.distplot(pokemon['speed']);
bin_edges = np.arange(0, pokemon['speed'].max()+1, 1)
sb.distplot(pokemon['speed'], bins = bin_edges, kde = False,
hist_kws = {'alpha' : 1})
###Output
_____no_output_____
###Markdown
Figures, Axes, and SubplotsThe base of a visualization in matplotlib is a Figure object. Contained within each Figure will be one or more Axes objects, each Axes object containing a number of other elements that represent each plot. In the earliest examples, these objects have been created implicitly
###Code
plt.hist(data = pokemon, x = 'speed')
###Output
_____no_output_____
###Markdown
One alternative way we could have created the histogram is to explicitly set up the Figure and Axes.
###Code
fig = plt.figure()
ax = fig.add_axes([.125, .125, .775, .755])
ax.hist(data = pokemon, x = 'speed')
###Output
_____no_output_____
###Markdown
To use Axes objects with seaborn, seaborn functions usually have an "ax" parameter to specify upon which Axes a plot will be drawn.
###Code
fig = plt.figure()
ax = fig.add_axes([.125, .125, .775, .755])
base_color = sb.color_palette()[0]
sb.countplot(data = pokemon, x='speed', color = base_color, ax = ax)
plt.figure(figsize = [10, 5]) # larger figure size for subplots
# example of somewhat too-large bin size
plt.subplot(1, 2, 1) # 1 row, 2 cols, subplot 1
bin_edges = np.arange(0, pokemon['speed'].max()+4, 4)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges)
# example of somewhat too-small bin size
plt.subplot(1, 2, 2) # 1 row, 2 cols, subplot 2
bin_edges = np.arange(0, pokemon['speed'].max()+1/4, 1/4)
plt.hist(data = pokemon, x = 'speed', bins = bin_edges);
###Output
_____no_output_____
###Markdown
Additional techniquesIf you don't assign Axes objects as they're created, you can retrieve the current Axes using ax = plt.gca(), or you can get a list of all Axes in a Figure fig by using axes = fig.get_axes(). As for creating subplots, you can use fig.add_subplot() in the same way as plt.subplot() above. If you already know that you're going to be creating a bunch of subplots, you can use the plt.subplots() function:
###Code
fig, axes = plt.subplots(3, 4) # grid of 3x4 subplots
axes = axes.flatten() # reshape from 3x4 array into 12-element vector
for i in range(12):
plt.sca(axes[i]) #set the current Axes
plt.text(0.5, 0.5, i + 1) # print conventional subplot index number to middle of Axes
###Output
_____no_output_____
###Markdown
Descriptive Statistics, Outliers, and Axis LimitsNote any aspects of the data like the number of modes and skew, and note the presence of outliers for further investigation; you might need to change the limits or scale of what is plotted to take a closer look at the underlying patterns in the data.
###Code
bins = np.arange(0, pokemon['height'].max() + 0.5, 0.5)
plt.hist(data = pokemon, x = 'height', bins = bins);
bins = np.arange(0, pokemon['height'].max() + 0.2, 0.2)
plt.hist(data = pokemon, x = 'height', bins = bins);
plt.xlim(0,6);
###Output
_____no_output_____
###Markdown
Scales and Transformation Certain data distributions will find themselves amenable to scale transformations. The most common example of this is data that follows an approximately log-normal distribution. This is data that, in their natural units, can look highly skewed: lots of points with low values, with a very long tail of data points with large values.
###Code
plt.figure(figsize = [10, 5])
# left histogram: data plotted in natural units
plt.subplot(1,2,1)
bin_edges = np.arange(0, pokemon['weight'].max()+100, 100)
plt.hist(data = pokemon, x = 'weight', bins = bin_edges)
plt.xlabel('values')
# right histogram: data plotted adter direct log transformation
plt.subplot(1, 2, 2)
log_data = np.log10(pokemon['weight']) # direct data transform
log_bin_edges = np.arange(0.8, log_data.max()+0.1, 0.1)
plt.hist(log_data, bins = log_bin_edges)
plt.xlabel('log(values)')
###Output
_____no_output_____
###Markdown
In the plot on the left, the few data points with value above 1000 mash the majority of the points into the bins on the far left. With the plot on the right, the logarithmic transform makes those large points look in line with the rest: a raw value of 1000 becomes a value of 3 under log transform, and a raw value of 100 becomes a log-transformed value of 2. This is where scale transformations are handy. In a scale transformation, the gaps between values are based on the transformed scale, but you can interpret data in the variable's natural units.
###Code
bin_edges = np.arange(0, pokemon['weight'].max() + 100, 100)
plt.hist(pokemon['weight'], bins = bin_edges)
plt.xscale('log')
###Output
_____no_output_____
###Markdown
Notice two things about the plot now. Even though the data is on a log scale, the bins are still linearly spaced. This means that they change size from wide on the left to thin on the right, as the values increase multiplicatively. Secondly, the default label settings are still somewhat tricky to interpret, and are sparse as well. To address the bin size issue, we just need to change them so that they are evenly-spaced powers of 10.
###Code
bin_edges = 10 ** np.arange(0.8, np.log10(pokemon['weight'].max()+0.1), 0.1)
plt.hist(pokemon['weight'], bins = bin_edges)
plt.xscale('log')
tick_locs = [10, 30, 100, 300, 1000, 3000]
plt.xticks(tick_locs, tick_locs)
###Output
_____no_output_____
###Markdown
The transformation implies that additive steps on the log scale will result in multiplicative changes in the natural scale, an important implication when it comes to data modeling.
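Concretely, $\log_{10}(ab) = \log_{10}(a) + \log_{10}(b)$, so a step of $+1$ on the $\log_{10}$ scale corresponds to multiplying the raw value by 10.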
###Code
def sqrt_trans(x, inverse = False):
""" transformatio helper function """
if not inverse:
return np.sqrt(x)
else:
return x ** 2
bin_edges = np.arange(0, sqrt_trans(pokemon['weight'].max())+1, 1)
plt.hist(pokemon['weight'].apply(sqrt_trans), bins = bin_edges)
tick_locs = np.arange(0, sqrt_trans(pokemon['weight'].max())+10, 10)
plt.xticks(tick_locs, sqrt_trans(tick_locs, inverse = True).astype(int))
###Output
_____no_output_____
###Markdown
Kernel Density Estimation Kernel density estimation is one way of estimating the probability density function of a variable. In a KDE plot, you can think of each observation as replaced by a small ‘lump’ of area. Stacking these lumps all together produces the final density curve. The default settings use a normal-distribution kernel.
###Code
sb.distplot(pokemon['height'])
###Output
_____no_output_____
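###Markdown
To make the "stack of lumps" idea concrete, here is a minimal numpy sketch of the same principle (this is not how seaborn computes its KDE internally): place one Gaussian bump on each observation and average the bumps. The data values match the rug plot demo below; the bandwidth is an assumed choice.
###Code
import numpy as np
import matplotlib.pyplot as plt

data = np.array([0.0, 3.0, 4.5, 8.0])   # same example points as the rug plot demo below
bw = 1.0                                 # kernel bandwidth (an assumed value)
xs = np.linspace(-3, 11, 200)

# one Gaussian "lump" per data point, each a proper density in x
lumps = np.exp(-0.5 * ((xs[:, None] - data) / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
# the kernel density estimate is the average of the lumps
kde = lumps.mean(axis=1)
plt.plot(xs, lumps, color='gray', alpha=0.5)   # individual lumps
plt.plot(xs, kde, color='red')                 # their average: the KDE
###Output
_____no_output_____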
###Markdown
Seaborn's distplot function calls another function, kdeplot, to generate the KDE. The demonstration code below also uses a third function called by distplot for illustration, rugplot. In a rugplot, data points are depicted as dashes on a number line.
###Code
data = [0.0, 3.0, 4.5, 8.0]
plt.figure(figsize = [12, 5])
# left plot: showing kde lumps with the default settings
plt.subplot(1, 3, 1)
sb.distplot(data, hist = False, rug = True, rug_kws = {'color' : 'r'})
# central plot: kde with narrow bandwidth to show individual probability lumps
plt.subplot(1, 3, 2)
sb.distplot(data, hist = False, rug = True, rug_kws = {'color' : 'r'},
kde_kws = {'bw' : 1})
# right plot: choosing a different, triangular kernel function (lump shape)
plt.subplot(1, 3, 3)
sb.distplot(data, hist = False, rug = True, rug_kws = {'color' : 'r'},
kde_kws = {'bw' : 1.5, 'kernel' : 'tri'})
###Output
_____no_output_____
###Markdown
Waffle Plots One alternative univariate plot type that you might see for categorical data is the waffle plot, also known as the square pie chart. While the standard pie chart uses a circle to represent the whole, a waffle plot is plotted onto a square divided into a 10x10 grid. Each small square in the grid represents one percent of the data, and a number of squares are colored by category to indicate total proportions. Compared to a pie chart, it is much easier to make precise assessments of relative frequencies.
###Code
def percentage_blocks(df, var):
"""
Take as input a dataframe and variable, and return a Pandas series with
approximate percentage values for filling out a waffle plot.
"""
# compute base quotas
percentages = 100 * df[var].value_counts() / df.shape[0]
counts = np.floor(percentages).astype(int) # integer part = minimum quota
decimal = (percentages - counts).sort_values(ascending = False)
# add in additional counts to reach 100
rem = 100 - counts.sum()
for cat in decimal.index[:rem]:
counts[cat] += 1
return counts
pokemon['generation_id'].value_counts()
pokemon['generation_id'].value_counts() / pokemon.shape[0]
percentage_blocks(pokemon, 'generation_id')
###Output
_____no_output_____
###Markdown
To **plot** those counts as boxes in the waffle plot form, use the __bar__ function
###Code
waffle_counts = percentage_blocks(pokemon, 'generation_id')
for cat in range(waffle_counts.shape[0]):
print(cat)
waffle_counts = percentage_blocks(pokemon, 'generation_id')
prev_count = 0
# for each category,
for cat in range(waffle_counts.shape[0]):
# get the block indices
blocks = np.arange(prev_count, prev_count + waffle_counts[cat])
# and put a block at each index's location
x = blocks % 10 # use mod operation to get ones digit
y = blocks // 10 # use floor division to get tens digit
plt.bar(x = x, height = 0.8, width = 0.8, bottom = y)
prev_count += waffle_counts[cat]
waffle_counts = percentage_blocks(pokemon, 'generation_id')
prev_count = 0
# for each category,
for cat in range(waffle_counts.shape[0]):
# get the block indices
blocks = np.arange(prev_count, prev_count + waffle_counts[cat])
# and put a block at each index's location
x = blocks % 10 # use mod operation to get ones digit
y = blocks // 10 # use floor division to get tens digit
plt.bar(x = x, height = 0.8, width = 0.8, bottom = y)
prev_count += waffle_counts[cat]
# aesthetic wrangling
plt.legend(waffle_counts.index, bbox_to_anchor = (1, 0.5), loc = 6)
plt.axis('off')
plt.axis('square')
# each box represents five full counts (matching the division by 5 below)
waffle_counts = (pokemon['generation_id'].value_counts() / 5).astype(int)
prev_count = 0
# for each category,
for cat in range(waffle_counts.shape[0]):
# get the block indices
blocks = np.arange(prev_count, prev_count + waffle_counts[cat])
# and put a block at each index's location
x = blocks % 10
y = blocks // 10
plt.bar(y, 0.8, 0.8, x)
prev_count += waffle_counts[cat]
# box size legend
plt.bar(7.5, 0.8, 0.8, 2, color = 'white', edgecolor = 'black', lw = 2)
plt.text(8.1, 2.4,'= 5 data points', va = 'center')
# aesthetic wrangling
plt.legend(waffle_counts.index, bbox_to_anchor = (0.8, 0.5), loc = 1)
plt.axis('off')
plt.axis('square')
###Output
_____no_output_____ |
03_numpy/numpy.ipynb | ###Markdown
USM Numérica The numpy library Objectives 1. Get to know the numpy library and its use for numerical computing. 2. Learn the differences in usage between numpy.matrix and numpy.array. 0.1 Instructions Instructions for installing and using an IPython notebook can be found at the following [link](link). After downloading and opening this notebook, remember to: * Work through the problems sequentially. * Save constantly with *`Ctr-S`* to avoid surprises. * Replace *`FIX_ME`* in the code cells with the corresponding code. * Run each code cell with *`Ctr-Enter`* 0.2 Licensing and Configuration Run the following cell with *`Ctr-Enter`*.
###Code
"""
IPython Notebook v4.0 for python 2.7
Additional libraries: numpy, matplotlib
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
###Output
_____no_output_____
###Markdown
Contents 1. Overview of Numpy and Scipy 2. The Numpy library * Arrays vs Matrices * Axis * Basic functions. * Input and Output 3. Tips 1. Overview of numpy and scipy What is the difference between numpy and scipy? *In an ideal world, NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic elementwise functions, et cetera. All numerical code would reside in SciPy. However, one of NumPy's important goals is compatibility, so NumPy tries to retain all features supported by either of its predecessors. Thus NumPy contains some linear algebra functions, even though these more properly belong in SciPy. In any case, SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms. If you are doing scientific computing with python, you should probably install both NumPy and SciPy. Most new features belong in SciPy rather than NumPy.* [Link stackoverflow](http://stackoverflow.com/questions/10766082/why-do-numpy-and-scipy-have-a-lot-of-the-same-functions-which-should-i-prefer/1076764410767644) Since python is an open-source language, there are thousands of packages available, created by individuals or communities. They may live in a repository such as github or bitbucket, or be available in the official python repository: [pypi](https://pypi.python.org/pypi). Early on, when there was no official scientific computing library, several candidates proposed solutions: * **numpy**: had an excellent representation of vectors, matrices and arrays, implemented in C and easily called from python * **scipy**: proposed linking to existing high-performance scientific computing libraries written in C or fortran, so they could be run quickly from python. Both projects grew in complexity and scope, and instead of competing they decided to split responsibilities and join forces to offer a scientific computing platform that could fully replace other programs. * **numpy**: covers everything related to the data structures (dense and sparse arrays, matrices, special constructors, reading regular data, etc.), but not the operations themselves. For historical and compatibility reasons it includes some algorithms, but in practice it is more consistent to use the scipy algorithms. * **scipy**: covers the numerical implementation of a wide range of scientific algorithms: linear algebra, statistics, ordinary differential equations, interpolation, integration, optimization, signal analysis, among others. IMPORTANT OBSERVATION: numpy matrices and arrays must contain variables of the same data type: only integers, only floats, only complex numbers, only booleans or only strings. This uniformity of the data is what makes it possible to speed up the computations with low-level C implementations. 2. The Numpy library We will always import the numpy library as follows: import numpy as np All numpy functions and modules are then within reach, just 3 characters away: np.array([1,4,9,16]) np.linspace(0.,1.,100) Avoid at all costs using the following: from numpy import *
###Code
import numpy as np
print np.version.version # Si alguna vez tienen problemas, verifiquen su version de numpy
###Output
_____no_output_____
###Markdown
Important**IPython notebook** is interactive and lets you use tab completion to get suggestions or show help (not only for numpy, but for any python code).Try the following examples:
###Code
# Presionar tabulacción con el cursor despues de np.arr
np.arr
# Presionar Ctr-Enter para obtener la documentacion de la funcion np.array usando "?"
np.array?
# Presionar Ctr-Enter
%who
x = 10
%who
###Output
_____no_output_____
###Markdown
2. The Numpy library 2.1 Array vs MatrixBy default, the vast majority of numpy and scipy functions assume they will be given an object of type **array**. We will look at the differences between the array and matrix objects, but remember to use array whenever possible. MatrixA numpy matrix behaves exactly as we would expect from a matrix: Pros:* Multiplication uses the * sign, as expected.* It feels natural if all we are going to do is linear algebra. Cons:* All matrices must be fully aligned in order to operate correctly.* Elementwise operations are harder to define/access.* They are exclusively 2D: a row vector or a column vector is still 2D.
###Code
# Operaciones con np.matrix
A = np.matrix([[1,2],[3,4]])
B = np.matrix([[1, 1],[0,1]], dtype=float)
x = np.matrix([[1],[2]])
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "A*B =\n", A*B
print "A*x =\n", A*x
print "A*A = A^2 =\n", A**2
print "x.T*A =\n", x.T * A
###Output
_____no_output_____
###Markdown
2.1 Array vs Matrix ArrayA numpy array is simply a multidimensional "container". Pros:* It is multidimensional: 1D, 2D, 3D, ...* It is consistent: all operations are element-wise unless a specific function is used. Cons:* Matrix multiplication uses the **dot()** function
###Code
# Operaciones con np.matrix
A = np.array([[1,2],[3,4]])
B = np.array([[1, 1],[0,1]], dtype=float)
x = np.array([1,2]) # No hay necesidad de definir como fila o columna!
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "AoB = (multiplicacion elementwise) \n", A*B
print "A*B = (multiplicacion matricial, v1) \n", np.dot(A,B)
print "A*B = (multiplicacion matricial, v2) \n", A.dot(B)
print "A*A = A^2 = (potencia matricial)\n", np.linalg.matrix_power(A,2)
print "AoA = (potencia elementwise)\n", A**2
print "A*x =\n", np.dot(A,x)
print "x.T*A =\n", np.dot(x,A) # No es necesario transponer.
###Output
_____no_output_____
###Markdown
Challenge 1: matrix vs arrayLet$$ A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1\end{pmatrix}$$and $$ B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{pmatrix}$$1. Create the matrices using np.matrix and multiply them in the matrix sense. Print the result.2. Create the matrices using np.array and multiply them in the matrix sense. Print the result.
###Code
# 1: Utilizando matrix
A = np.matrix([]) # FIX ME
B = np.matrix([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
# 2: Utilizando arrays
A = np.array([]) # FIX ME
B = np.array([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
###Output
_____no_output_____
###Markdown
2.2 Indexing and SlicingArrays are indexed in the "traditional" way.* For a ***one-dimensional*** array: there is only one index. It is neither a row nor a column!* For a ***two-dimensional*** array: the first component is the rows, the second component is the columns. The notation therefore follows the traditional matrix convention.* For a ***three-dimensional*** array: the first component is the rows, the second the columns, and the third the next dimension.Regarding element indices, they start at zero, as in C. It is also possible to use negative indices; by convention -1 refers to the last element, -2 to the second-to-last element, and so on.For example, if a = [2,3,5,7,11,13,17,19], then a[0] is the value 2 and a[1] is the value 3, while a[-1] is the value 19 and a[-2] is the value 17.In addition, python has the "slicing notation":* **a[start:end]** : items from index **start** up to **end-1*** **a[start:]** : items from index start to the end of the array* **a[:end]** : items from the beginning up to index end-1* **a[:]** : all the items of the array (a new copy)* **a[start:end:step]** : items from start up to (but not including) end, with step step
###Code
x = np.arange(9) # "Vector" con valores del 0 al 8
print "x = ", x
print "x[:] = ", x[:]
print "x[5:] = ", x[5:]
print "x[:8] = ", x[:8]
print "x[:-1] = ", x[:-1]
print "x[1:-1] = ", x[1:-1]
print "x[1:-1:2] = ", x[1:-1:2]
A = x.reshape(3,3) # Arreglo con valores del 0 al 8, en 3 filas y 3 columnas.
print "\n"
print "A = \n", A
print "primera fila de A\n", A[0,:]
print "ultima columna de A\n", A[:,-1]
print "submatriz de A\n", A[:2,:2]
###Output
_____no_output_____
###Markdown
Remark1. Note that when taking slices (subsections) of an array we always obtain an array smaller than (or at most equal to) the original.2. This notation is extremely convenient, since it lets us manipulate the array without needing to know its size, and write numerical formulas compactly.For example, implementing a numerical derivative is as simple as follows.
###Code
def f(x):
return 1 + x**2
x = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]) # O utilizar np.linspace!
y = f(x) # Tan facil como llamar f sobre x
dydx = ( y[1:] - y[:-1] ) / ( x[1:] - x[:-1] )
x_aux = 0.5*(x[1:] + x[:-1])
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, '-s', label="f")
plt.plot(x_aux, dydx, '-s', label="df/dx")
plt.legend(loc="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
Challenge 2: Numerical differentiationImplement the computation of the second derivative, which can be obtained by centered finite differences as$$ \frac{d^2 f(x_i)}{dx^2} = \frac{1}{\Delta x^2} \Big( f(x_{i+1}) -2 f(x_{i}) + f(x_{i-1}) \Big)$$
###Code
def g(x):
return 1 + x**2 + np.sin(x)
x = np.linspace(0,1,10)
y = g(x)
d2ydx2 = 0 * x # FIX ME
x_aux = 0*d2ydx2 # FIX ME
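# One possible solution (sketch), using the uniform spacing produced by np.linspace:
dx = x[1] - x[0]
d2ydx2 = (y[2:] - 2*y[1:-1] + y[:-2]) / dx**2  # centered second difference at interior points
x_aux = x[1:-1]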
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, label="f")
plt.plot(x_aux, d2ydx2, label="d2f/dx2")
plt.legend(loc="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
2. The Numpy library 2.2 Basic functionsSome basic functions that are useful to know are the following:* **shape**: Gives the dimensions of the array. Always a tuple.* **len**: Gives the number of elements of the first dimension of the array. Always an integer.* **ones**: Creates an array with the given dimensions, initialized with 1s. 1D array by default.* **zeros**: Creates an array with the given dimensions, initialized with 0s. 1D array by default.* **eye**: Creates an array with the given dimensions, initialized with 1 on the diagonal. 2D array by default.
###Code
# arrays 1d
A = np.ones(3)
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros(3)
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(1,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# Si queremos forzar la misma forma que A y B
C = np.eye(1,3).flatten() # o np.eye(1,3)[0,:]
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# square arrays
A = np.ones((3,3))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((3,3))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(3) # Or np.eye(3,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# fat 2d array
A = np.ones((2,5))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((2,5))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(2,5)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
###Output
_____no_output_____
###Markdown
2. The Numpy library 2.2 Basic functionsSome more basic functions that are useful to know are the following:* **reshape**: Converts an array to a new shape. The number of elements must stay the same.* **linspace**: Returns an array with linearly spaced values.* **diag(x)**: If x is 1D, returns a 2D array with the values on the diagonal. If x is 2D, returns the values of the diagonal.* **sum**: Sums the values of the array. It can be done overall or along an axis.* **mean**: Computes the average of the values of the array. It can be done overall or along an axis.* **std**: Computes the standard deviation of the values of the array. It can be done overall or along an axis.
###Code
x = np.linspace(0., 1., 6)
A = x.reshape(3,2)
print "x = \n", x
print "A = \n", A
print "np.diag(x) = \n", np.diag(x)
print "np.diag(B) = \n", np.diag(A)
print ""
print "A.sum() = ", A.sum()
print "A.sum(axis=0) = ", A.sum(axis=0)
print "A.sum(axis=1) = ", A.sum(axis=1)
print ""
print "A.mean() = ", A.mean()
print "A.mean(axis=0) = ", A.mean(axis=0)
print "A.mean(axis=1) = ", A.mean(axis=1)
print ""
print "A.std() = ", A.std()
print "A.std(axis=0) = ", A.std(axis=0)
print "A.std(axis=1) = ", A.std(axis=1)
###Output
_____no_output_____
###Markdown
Challenge 3Complete the following code:* You are given a square array A.* Compute an array B as the element-wise multiplication of A with itself.* Compute an array C as the matrix multiplication of A and B.* Print the resulting matrix C.* Compute the sum, mean and standard deviation of the values on the diagonal of C.* Print the values computed above.
###Code
A = np.outer(np.arange(3),np.arange(3))
print A
# FIX ME
# FIX ME
# FIX ME
# FIX ME
# FIX ME
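# One possible solution (sketch):
B = A*A                 # element-wise product
C = np.dot(A, B)        # matrix product
print(C)
d = np.diag(C)
print(d.sum())
print(d.mean())
print(d.std())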
###Output
_____no_output_____
###Markdown
Challenge 4Implement the [trapezoidal integration](https://en.wikipedia.org/wiki/Trapezoidal_rule) rule
###Code
def mi_funcion(x):
f = 1 + x + x**3 + x**5 + np.sin(x)
return f
N = 5
x = np.linspace(-1,1,N)
y = mi_funcion(x)
# FIX ME
I = 0 # FIX ME
# FIX ME
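# One possible solution (sketch): trapezoidal rule on the grid x
I = np.sum(0.5*(y[1:] + y[:-1])*(x[1:] - x[:-1]))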
print "Area bajo la curva: %.3f" %I
# Ilustración gráfica
x_aux = np.linspace(x.min(),x.max(),N**2)
fig = plt.figure(figsize=(12,8))
fig.gca().fill_between(x, 0, y, alpha=0.25)
plt.plot(x_aux, mi_funcion(x_aux), 'k')
plt.plot(x, y, 'r.-')
plt.show()
###Output
_____no_output_____
###Markdown
2. The Numpy library 2.5 Input and OutputNumpy can read data into an array with the **loadtxt** function. There are several optional arguments, but the most important ones are:* **skiprows**: allows skipping lines while reading.* **dtype**: declares the data type of the resulting array
###Code
# Ejemplo de lectura de datos
data = np.loadtxt("data/cherry.txt")
print data.shape
print data
# Ejemplo de lectura de datos, saltandose 11 lineas y truncando a enteros
data_int = np.loadtxt("data/cherry.txt", skiprows=11).astype(int)
print data_int.shape
print data_int
###Output
_____no_output_____
###Markdown
2. The Numpy library 2.5 Input and OutputNumpy makes it easy to save data with the **savetxt** function: we must always give the file name and the array to save. There are several optional arguments, but the most important ones are:* **header**: Line to write as a header for the data* **fmt**: Format with which the data are saved (%d for integers, %.5f for floats with 5 decimals, %.3E for scientific notation with 3 decimals, etc.).
###Code
# Guardando el archivo con un header en español
encabezado = "Diametro Altura Volumen (Valores truncados a numeros enteros)"
np.savetxt("data/cherry_int.txt", data_int, fmt="%d", header=encabezado)
###Output
_____no_output_____
###Markdown
Let's check that the file was written correctly. We will switch from **python** to **bash** to use the terminal commands:
###Code
%%bash
cat data/cherry_int.txt
###Output
_____no_output_____
###Markdown
Challenge 5* Read the file data/cherry.txt* Scale the matrix so that all units are in meters or cubic meters.* Save the matrix to a new file data/cherry_mks.txt, with an appropriate header and 2 decimal places of precision for the floats (but not in scientific notation).
###Code
# Leer datos
#FIX_ME#
# Convertir a mks
#FIX_ME#
# Guardar en nuevo archivo
#FIX_ME#
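# One possible solution (sketch). It assumes the columns of data/cherry.txt are
# diameter [inches], height [feet] and volume [cubic feet], as in the classic
# cherry-tree dataset; adjust the conversion factors if the units differ.
data = np.loadtxt("data/cherry.txt")
factors = np.array([0.0254, 0.3048, 0.3048**3])  # inch->m, ft->m, ft^3->m^3
data_mks = data * factors
np.savetxt("data/cherry_mks.txt", data_mks, fmt="%.2f",
           header="Diametro [m] Altura [m] Volumen [m3]")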
###Output
_____no_output_____
###Markdown
2. The Numpy library 2.6 Data selectionThere are 2 ways to select data in an array A:* Use data masks, which are arrays with the same dimensions as A but of boolean type. All elements where the mask array is True are selected.* Use an array of integer values. The values of that array indicate which elements should be kept. 2.6 MasksNote that the returned array is always one-dimensional, since it is not possible to guarantee that the original dimensions of the array are preserved.
###Code
x = np.linspace(0,42,10)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
mask_x_1 = x>10
print "mask_x_1 = ", mask_x_1
print "x[mask_x_1] = ", x[mask_x_1]
print "x[mask_x_1].shape = ", x[mask_x_1].shape
print "\n"
mask_x_2 = x > x.mean()
print "mask_x_2 = ", mask_x_2
print "x[mask_x_2] = ", x[mask_x_2]
print "x[mask_x_2].shape = ", x[mask_x_2].shape
A = np.linspace(10,20,12).reshape(3,4)
print "\n"
print "A = ", A
print "A.shape = ", A.shape
print "\n"
mask_A_1 = A>13
print "mask_A_1 = ", mask_A_1
print "A[mask_A_1] = ", A[mask_A_1]
print "A[mask_A_1].shape = ", A[mask_A_1].shape
print "\n"
mask_A_2 = A > 0.5*(A.min()+A.max())
print "mask_A_2 = ", mask_A_2
print "A[mask_A_2] = ", A[mask_A_2]
print "A[mask_A_2].shape = ", A[mask_A_2].shape
T = np.linspace(-100,100,24).reshape(2,3,4)
print "\n"
print "T = ", T
print "T.shape = ", T.shape
print "\n"
mask_T_1 = T>=0
print "mask_T_1 = ", mask_T_1
print "T[mask_T_1] = ", T[mask_T_1]
print "T[mask_T_1].shape = ", T[mask_T_1].shape
print "\n"
mask_T_2 = 1 - T + 2*T**2 < 0.1*T**3
print "mask_T_2 = ", mask_T_2
print "T[mask_T_2] = ", T[mask_T_2]
print "T[mask_T_2].shape = ", T[mask_T_2].shape
###Output
_____no_output_____
###Markdown
2.6 IndicesNote that it is possible to repeat indices, so the resulting array can have more elements than the original array.For a 2d array, you must pass 2 arrays, the first one for the rows and the second one for the columns.
###Code
x = np.linspace(10,20,11)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
ind_x_1 = np.array([1,2,3,5,7])
print "ind_x_1 = ", ind_x_1
print "x[ind_x_1] = ", x[ind_x_1]
print "x[ind_x_1].shape = ", x[ind_x_1].shape
print "\n"
ind_x_2 = np.array([0,0,1,2,3,4,5,6,7,-3,-2,-1,-1])
print "ind_x_2 = ", ind_x_2
print "x[ind_x_2] = ", x[ind_x_2]
print "x[ind_x_2].shape = ", x[ind_x_2].shape
A = np.linspace(-90,90,10).reshape(2,5)
print "A = ", A
print "A.shape = ", A.shape
print "\n"
ind_row_A_1 = np.array([0,0,0,1,1])
ind_col_A_1 = np.array([0,2,4,1,3])
print "ind_row_A_1 = ", ind_row_A_1
print "ind_col_A_1 = ", ind_col_A_1
print "A[ind_row_A_1,ind_col_A_1] = ", A[ind_row_A_1,ind_col_A_1]
print "A[ind_row_A_1,ind_col_A_1].shape = ", A[ind_row_A_1,ind_col_A_1].shape
print "\n"
ind_row_A_2 = 1
ind_col_A_2 = np.array([0,1,3])
print "ind_row_A_2 = ", ind_row_A_2
print "ind_col_A_2 = ", ind_col_A_2
print "A[ind_row_A_2,ind_col_A_2] = ", A[ind_row_A_2,ind_col_A_2]
print "A[ind_row_A_2,ind_col_A_2].shape = ", A[ind_row_A_2,ind_col_A_2].shape
###Output
_____no_output_____
###Markdown
Challenge 6The power of a wind turbine, with $k$ a constant related to geometry and efficiency, $\rho$ the air density, $r$ the turbine radius in meters and $v$ the wind speed in meters per second, is given by: $$ P = \begin{cases} k \ \rho \ r^2 \ v^3, & 3 \leq v \leq 25\\ 0, & \text{otherwise}\end{cases}$$Typically one takes $k=0.8$ and an air density of $\rho = 1.2$ [$kg/m^3$].Compute the number of active wind turbines, the average power and the total power generated by the 11 turbines of the Canela 1 wind farm.The turbine radius values (in meters) and the wind speeds (in kilometers per hour) are given below as arrays in the code.
###Code
import numpy as np
k = 0.8
rho = 1.2 # air density, in kg/m^3
r_m = np.array([ 25., 25., 25., 25., 25., 25., 20., 20., 20., 20., 20.]) # radii in meters
v_kmh = np.array([10.4, 12.6, 9.7, 7.2, 12.3, 10.8, 12.9, 13.0, 8.6, 12.6, 11.2]) # in kilometers per hour
P = 0
n_activos = 0
P_mean = 0.0
P_total = 0.0
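# One possible solution (sketch):
v_ms = v_kmh / 3.6                        # convert km/h to m/s
activos = (v_ms >= 3) & (v_ms <= 25)      # turbines within the operating range
P = k * rho * r_m**2 * v_ms**3 * activos  # power per turbine (0 outside the range)
n_activos = activos.sum()
P_mean = P.mean()
P_total = P.sum()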
print "Existen %d aerogeneradores activos del total de %d" %(n_activos, r.shape[0])
print "La potencia promedio de los aeorgeneradores es {0:.2f} ".format(P_mean)
print "La potencia promedio de los aeorgeneradores es " + str(P_total)
###Output
_____no_output_____ |
tutorials/dacy-spacy-tutorial.ipynb | ###Markdown
Introduction to DaCy and SpaCy----Before we start, we assume you have installed DaCy and SpaCy; if not, you can run the following:
###Code
!pip install git+https://github.com/KennethEnevoldsen/DaCy
###Output
_____no_output_____
###Markdown
----Let's start off by loading DaCy as well as the smaller of the two models:
###Code
import dacy
# to see available models
for model in dacy.models():
print(model)
# loading the smallest model
nlp = dacy.load("da_dacy_medium_tft-0.0.0")
###Output
da_dacy_medium_tft-0.0.0
da_dacy_large_tft-0.0.0
###Markdown
Examining SpaCy's Classes
###Code
print(type(nlp))
doc = nlp("EU-landene Frankrig, Italien, Spanien og Tyskland har indgået vaccine-aftale med Rusland")
print(type(doc))
print(type(doc[0]))
# what can we do with the token class
print(dir(doc[0]))
# Extracting things from the document and the token class.
for token in doc:
print(f"{token}: \n\tPOS-tag: {token.tag_}, \n\tNER: {token.ent_type_} - {token.ent_type_}")
# Why the underscore '_'? Hint: Efficient data structures
# you can also extract things directly from the document class:
doc.ents
###Output
EU-landene:
POS-tag: NOUN,
NER: MISC - MISC
Frankrig:
POS-tag: PROPN,
NER: LOC - LOC
,:
POS-tag: PUNCT,
NER: -
Italien:
POS-tag: PROPN,
NER: LOC - LOC
,:
POS-tag: PUNCT,
NER: -
Spanien:
POS-tag: PROPN,
NER: LOC - LOC
og:
POS-tag: CCONJ,
NER: -
Tyskland:
POS-tag: PROPN,
NER: LOC - LOC
har:
POS-tag: AUX,
NER: -
indgået:
POS-tag: VERB,
NER: -
vaccine-aftale:
POS-tag: NOUN,
NER: -
med:
POS-tag: ADP,
NER: -
Rusland:
POS-tag: PROPN,
NER: LOC - LOC
###Markdown
Visualization of Predictions
###Code
from spacy import displacy
displacy.render(doc, style="ent")
displacy.render(doc, style="dep")
###Output
_____no_output_____
###Markdown
Expanding SpaCy---We will now briefly examine how to expand upon SpaCy for our own goals. We will do two things:- 1) add a readability measure, and- 2) a NER- and dependency-based task using DaCy.To do this we will first need some data. For this we will use the speeches by Mette Frederiksen:
###Code
import pandas as pd
df = pd.read_csv("../data/speeches.csv")
speeches = df[df["person"] == "Mette Frederiksen"]["text"].tolist()
print(speeches[3][:300])
print("---")
print(speeches[5][:300])
# a nice bonus of using SpaCy is you get a lot of "free stuff"
doc = nlp(speeches[3][:300])
for sent in doc.sents:
print(sent)
###Output
Deres MajestætKære formand, overborgmester og borgmester.
Kære alle sammen.
Kære København, hovedstad af Danmark.
Kæmpe stort tillykke med i dag.
Så kom dagen.
Efter mere end næsten 10 år med byggerod.
Cityringen står klar.
Det største anlægsprojekt i København siden Christian den Fjerde.
Jeg har lige
###Markdown
Measuring readabilityIn Danish a simple measure of readability is LIX. It is by no means the best, but it is a good heuristic.LIX is given as follows:$$LIX = \frac{O}{P} + \frac{L \cdot 100}{O}$$where:$O$: Number of words$P$: Number of full stops (I will use the number of sentences instead)$L$: Number of long words (longer than 6 characters)
###Code
from spacy.tokens import Doc
O = len(doc)
P = len(list(doc.sents))
L = len([t for t in doc if len(t)>6])
LIX = O/P + L*100/O
LIX
###Output
_____no_output_____
###Markdown
We naturally don't want to run this every time we need it. Thus it might be ideal to add a getter.Why a getter and not a function? Well, the getter is a function ;), but more than that, the getter only runs the function when the variable is needed, which makes it very efficient for simple tasks such as this. If you want to add more explicit variables you might want to add a pipe instead.
###Code
# adding it to the doc:
def LIX_getter(doc):
"""
extract LIX
"""
O = len(doc)
P = len(list(doc.sents))
L = len([t for t in doc if len(t)>6])
LIX = O/P + L*100/O
return LIX
# register the getter so it can be accessed as doc._.LIX
Doc.set_extension("LIX", getter=LIX_getter)
# testing it out on a doc
doc = nlp(speeches[0])
doc._.LIX
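# As a sketch of the "pipe" alternative mentioned above (assuming a spaCy v3
# pipeline, which DaCy models are built on), the same value could instead be
# computed once by a custom pipeline component and stored on the doc:
from spacy.language import Language
Doc.set_extension("LIX_value", default=None)
@Language.component("lix")
def lix_component(doc):
    doc._.LIX_value = LIX_getter(doc)
    return doc
nlp.add_pipe("lix", last=True)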
###Output
_____no_output_____
###Markdown
Using NER and Dependency ParsingTo start this off, let us first look at what entities Mette Frederiksen mentions in her speeches:
###Code
docs = nlp.pipe(speeches) # only use this for large amount of documents (not like this)
for doc in docs:
print(doc.ents)
###Output
(Poul Erik, godhavnsdrengene, Danmark, Sarah Smeds, Godhavn, Peter Sabroes, Danmark, Godhavn, Poul Erik, Arne, 70’erne, Danmark, Danmark, Godhavn, Danmarkshistorien, Danmark, Danmark, Danmarks, Danmark, Mette, Danmarks, godhavnsdrengene, Grønlands, Kim Kielsen, grønlandske, danske, Kim, Danmark, Danmark, Poul Erik Rasmussen, foreningen Godhavnsdrengene, Poul Erik, Poul Eriks, Sofie Gråbøl, Sebastian, Poul Erik, Danmark)
(Valbyparken, ungdommens folkemøde, Danmark, Afghanistan, afghanske, Kabul, Afghanistan, Syrien, Irak, ISIL, flyvevåbnet, søværnet, Irak, Irak, Danmark, Kosovo, Danmark, Danmark, Baltikum, baltiske, NATOs, russisk, Niels Juel, Middelhavet, Det Røde Hav, Indiske, franske, Charles de Gaulle, franskmændene, dansk, Mozambique, Ebola, Uganda, Tsunami, Beredskabsstyrelsen, Danmarks, Danmark, Danmarks, dansk, danskere, Per, 90’erne, Aalborg, PTSD, Danmarks)
(Danmark, Danmarks største fagforbund, Aalborg, Socialdemokratiet, Aalborg, 3F, Danmarks, 3F, s, Danmark, demokratiet, Danmark, danskerne, dansker, Arne, Arne, 3F, 3F., 3F, Per, Kristian Thulesen Dahl, borgerlige, dansk politik, Socialdemokratiet, Dansk Folkeparti, Socialdemokratiet, 3F, Danmark, Danmark, Anker, Arne, Arne, Danmark, Danmark, 3F, dansk, Danmark, Danmark, danskerne, AMU-centre, FN-topmødet, New York, danske, Danmarks, Danmark, Danske, danske, Danmark, Per)
(København, Danmark, Cityringen, København, Christian den, Christian, socialdemokrat, metroen, Cityringen, Nordhavn, København, 90’erne, metro, København, København, Danmark, Danmark, København, Frederiksberg)
(Dan Turèll, danskere, Danmark, dansk, oldboys, Danmark, Danmark, danskerne, Skat, amerikansk, Danmark, Danskerne, brexit, Danmark, demokratiet, Danmark, Danmark, Danmark, Danmark, Enhedslisten, SF, De Radikale, Folketinget, Folketinget, Danmark, Danmark, New Public Management, dansk, University College, PLC, Rigspolitiet, Fyns politi, Folketinget, venstre, sosu-assistent, demokrati, Socialdemokratiet, Folketinget, skattevæsenet, skattevæsenet, Christiansborg, Christiansborg, danskere, danske, Danmark, Danmark, Danmark, Mint, Sjælsmark, dansk, Danmark, Danmark, Danmarks, Anden Verdenskrig, Europa, Sønderjylland, Danmark, Tyskland, europæiske, Storbritannien, Europa, Europa, Europa, Middelhavet, Bruxelles, Danmark, dansk, Europapolitik, Arktis, Færøerne, Grønland, Færøernes, Grønlands, nordatlantiske, Folketinget, Færøerne, Bárður á, Nielsen, Færøerne, Grønland, Naalakkersuisut, Kim Kielsen, grønlandske, Grønland, Danmark, grønlandske, Grønlands, Danmarks, dansk, Danmark, Christiansborg, De Radikale, SF, Enhedslisten, Alternativet, De konservative, Dansk Folkeparti, Liberal Alliance, Dansk, danske, danske, Himmerland, Danmark, Danmark, Danmark, Danmark, Folketinget, Danmark, DANMARK)
(Danmark, Danmark, Mona, FOA-arbejdspladser, Kirkens Korshærs, Odense, SOSU-skoler, Vejle, Randers, Herlev, Silkeborg, Skive, København, Odense, sosu-uddannelserne, New Public Management, Plejecenter Ærtebjerghaven, Folketinget, Ældresagen, Folketinget, FOA, FOA, dansk)
(Sønderjylland, Danmark, Sønderjyderne, Første Verdenskrig, Danmark, Danmark, tyske, danske, danskere, Danmark, Danmark, dannebrogsflag, Aalborg, danske, Danmark, Marienborg, Danmark, Danmark, Danmark, Danmark, 90’erne, Danmarkshistorie, danskere, Danmark, København, Danmark, sønderjyderne, Danmark, danske, Danmark, Danmarks, dansk, dansk, Danmark)
(anden verdenskrig, holocaust, nazisternes, Auschwitz-Birkenau, Holocaust, jøder, Auschwitz-Birkenau, Auschwitz, europæiske, Yad Yashem, Anti-semitismen, holocaust, jødiske, Jerusalem, danske, holocaust, Jytte Israel, Robert Fischermann, Jytte, europæiske, jøder, danskere, danske, Sverige, Islands Brygge, tyske, franske, Roberts, tyskerne, Robert, Theresienstadt, Robert, Robert, nazisternes, Europa, Robert, nazisterne, nazisterne, jøde, Robert, Robert, Jytte, jøder, holocaust, antisemitismen, jøder, Danmark, Danmark, Danmark, danskere, danske, jøder, holocaust, jøde, dansker, danske, danske, Danmark, Europa)
(Danmark, demokrati, Sarah Hartvigsen Juncker)
(danskere, danskere, corona, Corona-virus, Italien, Europa, Italien, Danmark, Danmark, danskere, Danmark, Danmark, Danmark, DSB, intercity, Folketinget, danskere, Udenrigsministeriet, Norditalien, Iran, Kina, Sydkorea, Østrig, Danmark, Danmark, Danmark, corona, Danskerne, danskernes, Folketingets, Folketingets, Statsministeriet, danskerne, danskerne, Danmark)
(corona, Danmark, corona, Danmark, corona, Danmark, Danmark, danske, Folketingets, Danmark, corona, danske, Danmark, sundhedsministeren, Danmark, corona, arbejdsmarkedets parter, danske, Danmark, Danmark, Danmark, danskernes, Corona-udbruddet, Danmark, Danmark, corona-virus, Folketingets partier, danske, Folketingets, corona, corona, Danske, Danmark, corona, Danmark, corona, Europa, Danmark, Danmark, København, danske, Danmark, anden verdenskrig, Danmark, danske, Danmark, Danmark)
(Statsministeriet, corona, Danmark, danskere, Corona, Danmark, Corona-virus, italiener, amerikaner, dansker, Danmark, Danmark, Danmark, Danmark, Danmark, corona, corona, Coronaen, Danmark, Danmark, HF, danskerne, Folketinget, Venstre, Danmark, USA, amerikanere, danske, socialområdet, Danmark, Folketingets, Folketingets, Danmark, danskere, Danmark, Danmark, corona-tid, Danmark, Danmark, coronaen)
(Danmark, corona, Danmark, danskere, Danmarks, demokrati, Kastellet, Danmarks, Danmarks, Al Asad Air Base, Irak, danske, danske, irakiske, ISIL, danske, Irak, Europas, Frontex, Grækenland, NATO, s, Afghanistan, Estland, Afrikas, danske, Covid-19, Mali, dansk, FN’s mission, covid-19, FN, corona, franske, corona, Sahel-regionen, Europa, danskere, Danmark, Danmark)
(corona-smittetal, Storkøbenhavn, Storkøbenhavn, Hjørring, Ringsted, Aarhus, Odense, København, Hovedstadsregionen, Storkøbenhavn, Danmark, Seruminstituttet, corona, corona, Marienborg, Danmark, CO2-, Danmark, Power-to-X, Star Wars, Danmark, Danmark, dansk, Holland, Aalborg Portland, Aalborg Portland, Danmark, Danmark, verdensplan, Aalborg Portland, Aalborg Portland, Aalborg Portlands, Nordsøen, Folketingets, Danmarks, Folketinget, danske, pick the winners, Danmark, EU-kommissionen, EU, s, Danmark, Folketinget)
(Danmarks, Carlsberg, Fredericia, Kalundborg, Fyn, Storebælt, DI, Lars, DI, s, Danmark, DI-plan, Dansk, Nationalbanken, Dansk, danske, danske, Danmark, DI, corona, Danske, FN, Danmark, Danmark, EU, europæisk, Folketinget, Indiens, Modi, Indiens, Indien, Danmark, Indiens, Modi, Indien, Danmark, Danmark, Indien, Indien, Kina, Indien, Udenrigsministeriets, danske, danske, danske, dansk, corona, danske, danske, Danmark, corona, dansk, Danmark)
(Danmarks, Danmark, Danmark, coronaen, Europa, Danmark, corona, Europa, Danmark, Danmarks, Coronaen, Danmark, Folketinget, dansk, Folketinget, Danmarkshistoriens, dansk, Danmark, Folketinget, Danmark, Danmark, dansk, danske, Tårnby, socialdemokratisk, danske, Danmark, Danmark, 80’erne, danske, Folketinget, coronaen, corona, Danmark, coronaen, europæiske, Danmark, Indien, Indien, Kina, Indien, Indiens, Modi, Indien, Danmark, Indien, Danmark, danske, Danmarks, corona, danske, Danmarks, Danmark, Danmark, Motalavej, justitsministeren, boligministeren, Korsør, Danmark, Danmark, Danmark, Brønshøj, Gellerup, Gentofte, danske, danske, Brønshøj, København, Danmark, Motalavej, Danmark, europæisk, Middelhavet, sydeuropæiske, Danmark, Europa, Danmark, Europa, Arktis, Rusland, amerikansk, Arktis, USA, USA, Europa, Arktis, Arktis, Nordatlanten, Folketinget, Folketingets, Radikale Venstre, Enhedslisten, SF, Danmark, Danmark, Else, Folketinget, Corona, Social-, sosu, Rigspolitiet, dansk, Frederikssund, Fredericia, Viborg, Esbjerg, Christiansborg, 80’erne, Poul Schlüter, frikommuneforsøg, VK-regeringen, Det Radikale Venstre, danske, Helsingør, Helsingør, Danmarks, Helsingør, Rebild, Viborg, Middelfart, Holbæk, Langeland, Esbjerg, Folkeskolen, Esbjerg, Holbæk, Christiansborg, Folketinget, Folketinget, travbanen, Danmark, Danmarks, Danmark, SF, Radikale Venstre, Enhedslisten, EU's, finansministeren, Danmark, Coronaen, Danmarks, Danmark, Folketingets, Danmark, Mette Frederiksen, the Danish Parliament, Denmark, Denmark, Denmark, Europe, Denmark, Europe, reopen society, Denmark, Denmark, Corona, almost 7 percent, Danish economy, Denmark, Sectoral partnerships, Danish cooperation, who, Denmark, Denmark, Danish economy, Danish, Tårnby, Denmark, Denmark, Denmark, Danish, Even though, Denmark, Denmark, European Recovery Fund, India, Denmark, India, Denmark, Danish jobs, Denmark, Danish industrial businesses, Denmark, Denmark, Denmark, Korsør, several residents, Denmark, Denmark, social and healthcare, pedagogues, obdurate culture, Denmark, broad daylight, Denmark, Denmark, who, Brønshøj last week, Gellerup, Gentofte, Danish, Danish, Danish, Copenhagen against, Denmark, a train station, Denmark, Motalavej, Denmark, European, Europe, Denmark, EU, Denmark, Europe, Europe, Russian, Europe, North Atlantic area, Danish, Radikale Venstre, the Danish Social-Liberal Party, Enhedslisten, the Red-Green Alliance, SF, the Socialist People's, Denmark, porta cabin, Denmark, green areas, Danish National Police, local police, Danish National Police, Frederikssund, Fredericia, Viborg, Esbjerg, Christiansborg, Poul Schlüter, Danish, Helsingør, Helsingør, Denmark, Helsingør, Rebild, Viborg, Middelfart, Holbæk, Langeland, Esbjerg, the public school, municipal councils, nursing home unit, Esbjerg, Holbæk, Denmark, Christiansborg, green energy, Denmark, green research strategy, Denmark, new great wind turbine, green fuel, Denmark, Det Radikale Venstre, Enhedslisten, green transition, green economic, European, Green ambitions, social, Denmark, Denmark, Denmark, Denmark)
(Corona, dansk, Danmark, Danmark, Danmark, Danmark, danskerne, Europa, Danmark, Lizette, Danmark, Danish Crown, Danish Crown, NNF, Danmark, Socialdemokratiet, Arnes, FH, socialdemokratisk, danske)
(Anden Verdenskrig, Danmark, danskere, danskere, Danmark, Folketinget, Hal Koch, Demokrati, demokrati, Dansk, Danmark, Danmark, Folketinget, dansk, dansk, Danmark, danske, demokratiet, Danmark, dansker, Danmark, Haslev, Roskilde, Vordingborg, Maribo, Nakskov, Nykøbing, Kalundborg, Faaborg, Haderslev, Dronninglund, København, Jylland, Sjælland, Fyn, øerne, København, Jylland, Sjælland, Fyn, danskere, Lolland-Falster, Vestsjælland, Nordjylland, København, Danmark, spanske, Martin Andersen Nexø, Danmark, Danmark, Danmark, Danmark, Mette Frederiksen, new year, World War II, Denmark, chronic illness, Denmark, Public employees, Danish, Denmark, Denmark, greenhouse gas emissions, offshore wind from energy islands, new national parks, untouched forests, Danish agriculture, Danish businesses, Denmark, Danish, Denmark, Haslev, Roskilde, Vordingborg, Maribo, Nakskov, Nykøbing, Kalundborg, Faaborg, Haderslev, Dronninglund, Police stations, Copenhagen, Zealand, Copenhagen, new local, Jutland, Zealand, Lolland-Falster, West Zealand, North Jutland, Copenhagen, Denmark, Spanish flu, Martin Andersen Nexø, Denmark, Denmark exists, Denmark, Denmark)
###Markdown
Oh well, look at that: `Danmark` seems quite popular. Given that, let us examine how Mette describes Denmark. Let's first make a simple example:
###Code
doc = nlp("velkommen til skønne Danmark")
displacy.render(doc)
###Output
_____no_output_____
###Markdown
Notice how DK is described using the adjective *'skønne'* and that this is captured by the dependency tag *amod*. This can be extracted quite easily as follows:
###Code
[t for t in doc[3].subtree if t.dep_ == "amod"] # doc[3] corresponds to Danmark
###Output
_____no_output_____
###Markdown
Similarly to before, we can now add a method for doing this for all docs. Notice this function is only ever called when you extract the variable. Thus it is not really run before you need it.
###Code
def ent_desc_getter(doc, entity="danmark"):
"""
return words which describes the entity
assumes entity is length 1
"""
for ent in doc.ents:
if ent.text.lower() == entity:
out = [t for t in doc[ent.start].subtree if t.dep_ == "amod"]
if out:
for i in out:
yield i
Doc.set_extension("dk_desc", getter=ent_desc_getter)
# Testing it out on one speech
doc = nlp(speeches[0])
list(doc._.dk_desc)
# testing it out on all the speeches
docs = nlp.pipe(speeches)
for doc in docs:
print(list(doc._.dk_desc))
###Output
[hele]
[hele, hele]
[]
[grønnere]
[grønnere, hele, grønt]
[solidarisk, store]
[]
[]
[]
[hele]
[hele]
[Hele, mange]
[hele, alle, hele]
[grønt]
[]
[hele]
[]
[]
###Markdown
Naturally, one could extend this. One might wish to filter by the tag as well, e.g. by only showing adjectives. Similarly, this approach does not catch even simple cases such as *"Danmark er det skønneste land"*, in which case you can either parse the tree further and/or use coreference resolution. This concludes the tutorial. If you wish to work more on Danish NLP and DaCy, feel free to contribute to its development. How to Contribute---DaCy is by no means perfect and there are still some notable limitations:- Lemmatization: It currently uses a lookup table for lemmatization based on the training corpus; a more viable solution is to use the `lemmy` package for SpaCy v2, but it needs to be updated.- POS-tags: Currently POS-tags are assigned to the `tag_` not the `pos_` label. This needs to be fixed.- DaCy is trained on a fairly small training corpus; any data augmentation and/or increase in training will likely result in improved performance.- DaCy notably does not include a sentiment analysis component. There are multiple reasons for this, the primary being that DaNE is not tagged for sentiment and sentiment analysis still lacks a clear definition.If you make progress in any of these (or something else which you find relevant), please feel free to reach out.
###Code
def doc_to_dict_getter(doc, include=["token", "lemma", "pos", "ner", "dep"]):
    # construct dict using a loop over include
    out = {key: [] for key in include}
    for t in doc:
        if "token" in out:
            out["token"].append(t.text)
        if "lemma" in out:
            out["lemma"].append(t.lemma_)
        # ... the remaining attributes (pos, ner, dep) follow the same pattern
    return out
###Output
_____no_output_____ |
courses/machine_learning/deepdive/04_features/.ipynb_checkpoints/a_features-checkpoint.ipynb | ###Markdown
Trying out features **Learning Objectives:** * Improve the accuracy of a model by adding new features with the appropriate representation The data is based on 1990 census data from California. This data is at the city block level, so these features reflect the total number of rooms in that block, or the total number of people who live on that block, respectively. Set UpIn this first cell, we'll load the necessary libraries.
###Code
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
###Output
_____no_output_____
###Markdown
Next, we'll load our data set.
###Code
df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
###Output
_____no_output_____
###Markdown
Examine and split the dataIt's a good idea to get to know your data a little bit before you work with it.We'll print out a quick summary of a few useful statistics on each column.This will include things like mean, standard deviation, max, min, and various quantiles.
###Code
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Now, split the data into two parts -- training and evaluation.
###Code
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
###Output
_____no_output_____
###Markdown
Training and EvaluationIn this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target).We'll modify the feature_cols and input function to represent the features you want to use.Since our data is at block level, and we want to make predictions at the house level, we divide total_rooms by households to get the average number of rooms per house in that block.
###Code
# Add more features to dataframe that you think will be representative of the data distribution
def add_more_features(df):
df['num_rooms'] = df['total_rooms'] / df['households']
return df
# Create pandas input function
def make_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = add_more_features(df),
y = df['median_house_value'] / 100000, # will talk about why later in the course
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
# Define your feature columns
def create_feature_cols():
return [
tf.feature_column.numeric_column('housing_median_age'),
tf.feature_column.bucketized_column(tf.feature_column.numeric_column('latitude'), boundaries = np.arange(32.0, 42, 1).tolist()),
tf.feature_column.numeric_column('num_rooms'),
tf.feature_column.numeric_column('median_income')
]
# Create estimator train and evaluate function
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(model_dir = output_dir, feature_columns = create_feature_cols())
train_spec = tf.estimator.TrainSpec(input_fn = make_input_fn(traindf, 8),
max_steps = num_train_steps)
eval_spec = tf.estimator.EvalSpec(input_fn = make_input_fn(evaldf, 1),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds,
throttle_secs = 10) # evaluate every N seconds
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run the model
OUTDIR = './trained_model'
shutil.rmtree(OUTDIR, ignore_errors = True)
train_and_evaluate(OUTDIR, 5000)
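# As a sketch (not part of the original exercise): reload the trained model from
# OUTDIR and generate a few predictions on the evaluation set.
pred_input_fn = tf.estimator.inputs.pandas_input_fn(x = add_more_features(evaldf), shuffle = False, num_epochs = 1)
trained = tf.estimator.LinearRegressor(model_dir = OUTDIR, feature_columns = create_feature_cols())
predictions = trained.predict(input_fn = pred_input_fn)
print([next(predictions)['predictions'][0] for _ in range(3)])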
###Output
_____no_output_____ |
test/Executing_quantum_programs_on_IBMQ.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz, Özlem Salehi | November 29, 2019 (updated) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\inner}[2]{\langle #1,#2\rangle} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ Executing Quantum Programs on IBMQ We create a quantum circuit.
###Code
# import the objects from qiskit
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# create classical and quantum register objects and a quantum circuit
qreg = QuantumRegister(2) # my quantum register
creg = ClassicalRegister(2) # my classical register
circuit = QuantumCircuit(qreg,creg) # my quantum circuit
# apply a Hadamard gate to the first qubit
circuit.h(qreg[0])
# set the second qubit to |1>
circuit.x(qreg[1])
# apply CNOT(first_qubit,second_qubit)
circuit.cx(qreg[0],qreg[1])
# measure both qubits
circuit.measure(qreg,creg)
###Output
_____no_output_____
###Markdown
We draw the circuit.
###Code
circuit.draw(output="mpl")
###Output
_____no_output_____
###Markdown
IBM Q TestNow, we test our system for accessing IBM Q. The remaining part requires an internet connection.Our test circuit will be executed on the IBM simulator, and then on one of IBM's real quantum computers. Please wait for the execution of each cell to complete before executing the next cell. Joining IBM Q ExperienceIn order to use IBM services, one should be a member of IBM Q Experience.Sign up and then sign in hereAfter signing into the system, go to My Account (top-right icon)There you can copy your API key, which is used when connecting to IBM platforms.You can also see IBM Q Backend Access (available to you, under maintenance, etc.) and your Units. Save your API token on the diskPlease write YOUR IBM API TOKEN in the following cell, and then run the cell.Once it is saved on the disk, it can be used directly later.
###Code
from qiskit import IBMQ
IBMQ.save_account('write YOUR IBM API TOKEN here')
# Then, execute this cell
###Output
_____no_output_____
###Markdown
See the stored account(s)
###Code
IBMQ.stored_account()
###Output
_____no_output_____
###Markdown
Load our account(s)
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
See the active account(s)
###Code
IBMQ.active_account()
###Output
_____no_output_____
###Markdown
Get provider
###Code
provider = IBMQ.get_provider()
print(provider)
###Output
_____no_output_____
###Markdown
See available backends
###Code
provider.backends()
###Output
_____no_output_____
###Markdown
See the currently operational real quantum computer(s)
###Code
provider.backends(operational=True, simulator=False)
###Output
_____no_output_____
###Markdown
See the least busy real quantum computer
###Code
from qiskit.providers.ibmq import least_busy
least_busy(provider.backends(simulator=False))
###Output
_____no_output_____
###Markdown
IBMQ simulator Use the simulator as backend
###Code
simulator_backend = provider.get_backend('ibmq_qasm_simulator')
simulator_backend.name()
###Output
_____no_output_____
###Markdown
Execute the circuit
###Code
job = execute(circuit, backend=simulator_backend, shots=1024)
###Output
_____no_output_____
###Markdown
Check the result
###Code
result = job.result()
counts = result.get_counts()
print(counts)
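# Hedged addition (not part of the original notebook): the counts can also be shown as
# a histogram with qiskit's plotting helper.
from qiskit.visualization import plot_histogram
plot_histogram(counts)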
###Output
_____no_output_____
###Markdown
IBMQ real quantum computers (Optional) Use the least busy real machine as backend
###Code
backend_real = least_busy(provider.backends(simulator=False))
backend_real.name()
backend_real.status()
###Output
_____no_output_____
###Markdown
Select a specific backend
###Code
provider.backends()
vigo=provider.get_backend('ibmq_vigo')
print(vigo)
###Output
_____no_output_____
###Markdown
Execute the same job on a real machine Depending on the number of pending jobs, it might take a while to execute your job on the real machine. But this is not a problem for completing our tutorial, because we use the local simulator during the tutorial.
###Code
job_real = execute(circuit, backend=backend_real, shots=1024)
#job_real = execute(circuit, backend=vigo, shots=1024)
job_real.queue_position()
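# Hedged addition (not part of the original notebook): instead of polling the queue
# position manually, qiskit's job monitor can wait for the job and print status updates.
from qiskit.tools.monitor import job_monitor
job_monitor(job_real)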
###Output
_____no_output_____
###Markdown
Check the result
###Code
result_real = job_real.result()
counts_real = result_real.get_counts()
print(counts_real)
###Output
{'01': 439, '00': 22, '10': 548, '11': 15}
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) updated by Özlem Salehi | January 5, 2020 This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Executing Quantum Programs on IBMQ We create a quantum circuit.
###Code
# import the objects from qiskit
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# create classical and quantum register objects and a quantum circuit
qreg = QuantumRegister(2) # my quantum register
creg = ClassicalRegister(2) # my classical register
circuit = QuantumCircuit(qreg,creg) # my quantum circuit
# apply a Hadamard gate to the first qubit
circuit.h(qreg[0])
# set the second qubit to |1>
circuit.x(qreg[1])
# apply CNOT(first_qubit,second_qubit)
circuit.cx(qreg[0],qreg[1])
# measure both qubits
circuit.measure(qreg,creg)
###Output
_____no_output_____
###Markdown
We draw the circuit.
###Code
circuit.draw(output="mpl")
###Output
_____no_output_____
###Markdown
IBM Q TestNow, we test our system for accessing IBM Q. The remaining part requires an internet connection.Our test circuit will be executed on the IBM simulator, and then on one of IBM's real quantum computers. Please wait for the execution of each cell to complete before executing the next cell. Joining IBM Q ExperienceIn order to use IBM services, one should be a member of IBM Q Experience.Sign up and then sign in hereAfter signing into the system, go to My Account (top-right icon)There you can copy your API key, which is used when connecting to IBM platforms.You can also see IBM Q Backend Access (available to you, under maintenance, etc.) and your Units. Save your API token on the diskPlease write YOUR IBM API TOKEN in the following cell, and then run the cell.Once it is saved on the disk, it can be used directly later.
###Code
from qiskit import IBMQ
IBMQ.save_account('write YOUR IBM API TOKEN here')
# Then, execute this cell
###Output
_____no_output_____
###Markdown
See the stored account(s)
###Code
IBMQ.stored_account()
###Output
_____no_output_____
###Markdown
Load our account(s)
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
See the active account(s)
###Code
IBMQ.active_account()
###Output
_____no_output_____
###Markdown
Get provider
###Code
provider = IBMQ.get_provider(hub='ibm-q')
print(provider)
###Output
_____no_output_____
###Markdown
See available backends
###Code
provider.backends()
###Output
_____no_output_____
###Markdown
See the currently operational real quantum computer(s)
###Code
provider.backends(operational=True, simulator=False)
###Output
_____no_output_____
###Markdown
See the least busy real quantum computer
###Code
from qiskit.providers.ibmq import least_busy
least_busy(provider.backends(simulator=False))
###Output
_____no_output_____
###Markdown
IBMQ simulator This is IBMQ's simulator. You can use this simulator as a backend for running large circuits.
###Code
simulator_backend = provider.get_backend('ibmq_qasm_simulator')
simulator_backend.name()
###Output
_____no_output_____
###Markdown
Execute the circuit
###Code
job = execute(circuit, backend=simulator_backend, shots=1024)
###Output
_____no_output_____
###Markdown
Check the result
###Code
result = job.result()
counts = result.get_counts()
print(counts)
###Output
_____no_output_____
###Markdown
IBMQ real quantum computers (Optional) You can use the least busy real machine as backend
###Code
backend_real = least_busy(provider.backends(simulator=False))
backend_real.name()
backend_real.status()
###Output
_____no_output_____
###Markdown
You can also select a specific backend
###Code
provider.backends()
vigo=provider.get_backend('ibmq_vigo')
vigo.status()
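# Hedged addition (not part of the original notebook): backends can also be filtered
# programmatically, e.g. real devices with at least 5 qubits.
provider.backends(filters=lambda b: b.configuration().n_qubits >= 5 and not b.configuration().simulator)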
###Output
_____no_output_____
###Markdown
Execute the same job on a real machine Depending on the number of pending jobs, it might take a while to execute your job on the real machine. But this is not a problem for completing our tutorial, because we use the local simulator during the tutorial.
###Code
job_real = execute(circuit, backend=backend_real, shots=1024)
#job_real = execute(circuit, backend=vigo, shots=1024)
job_real.queue_position()
###Output
_____no_output_____
###Markdown
Check the result
###Code
result_real = job_real.result()
counts_real = result_real.get_counts()
print(counts_real)
###Output
_____no_output_____
###Markdown
prepared by Abuzer Yakaryilmaz (QLatvia) updated by Özlem Salehi | January 5, 2020 This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $ Executing Quantum Programs on IBMQ We create a quantum circuit.
###Code
# import the objects from qiskit
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
# create classical and quantum register objects and a quantum circuit
qreg = QuantumRegister(2) # my quantum register
creg = ClassicalRegister(2) # my classical register
circuit = QuantumCircuit(qreg,creg) # my quantum circuit
# apply a Hadamard gate to the first qubit
circuit.h(qreg[0])
# set the second qubit to |1>
circuit.x(qreg[1])
# apply CNOT(first_qubit,second_qubit)
circuit.cx(qreg[0],qreg[1])
# measure both qubits
circuit.measure(qreg,creg)
###Output
_____no_output_____
###Markdown
We draw the circuit.
###Code
circuit.draw(output="mpl")
###Output
_____no_output_____
###Markdown
IBM Q TestNow, we test our system for accessing IBM Q. The remaining part requires an internet connection.Our test circuit will be executed on the IBM simulator, and then on one of IBM's real quantum computers. Please wait for the execution of each cell to complete before executing the next cell. Joining IBM Q ExperienceIn order to use IBM services, one should be a member of IBM Q Experience.Sign up and then sign in hereAfter signing into the system, go to My Account (top-right icon)There you can copy your API key, which is used when connecting to IBM platforms.You can also see IBM Q Backend Access (available to you, under maintenance, etc.) and your Units. Save your API token on the diskPlease write YOUR IBM API TOKEN in the following cell, and then run the cell.Once it is saved on the disk, it can be used directly later.
###Code
from qiskit import IBMQ
IBMQ.save_account('write YOUR IBM API TOKEN here')
# Then, execute this cell
###Output
_____no_output_____
###Markdown
See the stored account(s)
###Code
IBMQ.stored_account()
###Output
_____no_output_____
###Markdown
Load our account(s)
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
See the active account(s)
###Code
IBMQ.active_account()
###Output
_____no_output_____
###Markdown
Get provider
###Code
provider = IBMQ.get_provider(hub='ibm-q')
print(provider)
###Output
_____no_output_____
###Markdown
See available backends
###Code
provider.backends()
###Output
_____no_output_____
###Markdown
See the currently operational real quantum computer(s)
###Code
provider.backends(operational=True, simulator=False)
###Output
_____no_output_____
###Markdown
See the least busy real quantum computer
###Code
from qiskit.providers.ibmq import least_busy
least_busy(provider.backends(simulator=False))
###Output
_____no_output_____
###Markdown
IBMQ simulator This is IBMQ's simulator. You can use this simulator as a backend for running large circuits.
###Code
simulator_backend = provider.get_backend('ibmq_qasm_simulator')
simulator_backend.name()
###Output
_____no_output_____
###Markdown
Execute the circuit
###Code
job = execute(circuit, backend=simulator_backend, shots=1024)
###Output
_____no_output_____
###Markdown
Check the result
###Code
result = job.result()
counts = result.get_counts()
print(counts)
###Output
_____no_output_____
###Markdown
IBMQ real quantum computers (Optional) You can use the least busy real machine as backend
###Code
backend_real = least_busy(provider.backends(simulator=False))
backend_real.name()
backend_real.status()
###Output
_____no_output_____
###Markdown
You can also select a specific backend
###Code
provider.backends()
vigo=provider.get_backend('ibmq_vigo')
vigo.status()
###Output
_____no_output_____
###Markdown
Execute the same job on a real machine Depending on the number of pending jobs, it might take a while to execute your job on the real machine. But this is not a problem for completing our tutorial, because we use the local simulator during the tutorial.
###Code
job_real = execute(circuit, backend=backend_real, shots=1024)
#job_real = execute(circuit, backend=vigo, shots=1024)
job_real.queue_position()
###Output
_____no_output_____
###Markdown
Check the result
###Code
result_real = job_real.result()
counts_real = result_real.get_counts()
print(counts_real)
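# Hedged addition (not part of the original notebook): compare the noisy hardware
# counts with the ideal simulator counts obtained above.
from qiskit.visualization import plot_histogram
plot_histogram([counts, counts_real], legend=["simulator", "real device"])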
###Output
_____no_output_____ |
modules/interpretability/class_lung_lesion.ipynb | ###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches.The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models.For the demo data:- Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`).- Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1vl330aJew1NCc31IVYJSAh3y0XdDdydi[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
ToTensord,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for the visualisations.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
ToTensord("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", win_size, "trilinear", True),
ToTensord(("image", "label")),
]
)
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(data=val_files, transform=val_transforms)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Start the model, loss function, and optimizer.
###Code
model = monai.networks.nets.densenet.densenet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
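# Hedged addition (not part of the original tutorial): a dummy forward pass as a quick
# sanity check that a single-channel 3D patch is mapped to two logits.
with torch.no_grad():
    dummy_logits = model(torch.zeros(1, 1, 64, 64, 40, device=device))
print(dummy_logits.shape)  # expected: torch.Size([1, 2])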
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.densenet.densenet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
precision recall f1-score support
non-lesion 1.00 0.86 0.92 14
lesion 0.85 1.00 0.92 11
accuracy 0.92 25
macro avg 0.92 0.93 0.92 25
weighted avg 0.93 0.92 0.92 25
###Markdown
InterpretabilityUse GradCAM and occlusion sensitivity for network interpretability.The occlusion sensitivity returns two images: the sensitivity image and the most probable class.* Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[...,i]``.* Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display correctly classified lesion (tumour) images
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches.The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models.For the demo data:- Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`).- Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1pQdzdkkC9c2GOblLgpGlG3vxsSK9NtDx[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
EnsureTyped,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for the visualisations.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
EnsureTyped("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
EnsureTyped(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
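# Hedged addition (not part of the original tutorial): peek at one validation batch to
# confirm the expected tensor shapes (a channel-first image resized to win_size and a
# one-hot label per sample).
first_batch = monai.utils.first(val_loader)
print(first_batch["image"].shape, first_batch["label"].shape)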
###Output
_____no_output_____
###Markdown
Start the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
precision recall f1-score support
non-lesion 1.00 0.86 0.92 14
lesion 0.85 1.00 0.92 11
accuracy 0.92 25
macro avg 0.92 0.93 0.92 25
weighted avg 0.93 0.92 0.92 25
###Markdown
InterpretabilityUse GradCAM and occlusion sensitivity for network interpretability.The occlusion sensitivity returns two images: the sensitivity image and the most probable class.* Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[...,i]``.* Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display correctly classified lesion (tumour) images
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches.The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models.For the demo data:- Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`).- Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1vl330aJew1NCc31IVYJSAh3y0XdDdydi[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
ToTensord,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for the visualisations.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
ToTensord("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", win_size, "trilinear", True),
ToTensord(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Start the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
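# Hedged addition (not part of the original tutorial): report the number of trainable
# parameters as a quick check of the model size.
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {num_params:,}")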
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
precision recall f1-score support
non-lesion 1.00 0.86 0.92 14
lesion 0.85 1.00 0.92 11
accuracy 0.92 25
macro avg 0.92 0.93 0.92 25
weighted avg 0.93 0.92 0.92 25
###Markdown
InterpretabilityUse GradCAM and occlusion sensitivity for network interpretability.The occlusion sensitivity returns two images: the sensitivity image and the most probable class.* Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[...,i]``.* Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display correctly classified lesion (tumour) images
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches.The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models.For the demo data:- Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`).- Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1pQdzdkkC9c2GOblLgpGlG3vxsSK9NtDx[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
EnsureTyped,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
url = "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/lung_lesion_patches.tar.gz"
monai.apps.download_and_extract(url, output_dir=data_path)
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for the visualisations.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
EnsureTyped("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
EnsureTyped(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Start the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
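# Hedged addition (not part of the original tutorial): a tiny illustration of the
# criterion above, which flattens the (batch, 2) logits and one-hot targets before
# applying BCEWithLogitsLoss.
example_logits = torch.tensor([[2.0, -1.0]])
example_target = torch.tensor([[1.0, 0.0]])
print(criterion(example_logits, example_target))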
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
              precision    recall  f1-score   support
  non-lesion       1.00      0.86      0.92        14
      lesion       0.85      1.00      0.92        11
    accuracy                           0.92        25
   macro avg       0.92      0.93      0.92        25
weighted avg       0.93      0.92      0.92        25
###Markdown
Interpretability: use GradCAM and occlusion sensitivity to interpret the network's predictions. The occlusion sensitivity returns two images: the sensitivity image and the most probable class. * Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class. * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[..., i]``. * Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
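# In this bounding box, -1 leaves a dimension unrestricted (its full extent is
# used), while the final (min, max) pair fixes the depth axis to `the_slice`,
# so occlusion is only evaluated on that single 2D plane.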
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display lesion images that the model also predicts as lesion
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches. The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models. For the demo data: - Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`). - Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1vl330aJew1NCc31IVYJSAh3y0XdDdydi [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
try:
import monai
except ImportError:
%pip install "monai[tqdm]"
import os
import glob
import random
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import tempfile
from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay
import monai
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld, Compose, LoadImaged, RandRotate90d,
Resized, ScaleIntensityRanged, ToTensord,
RandFlipd, RandSpatialCropd,
)
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
use_patch_dataset=True # switch this to use partial dataset or whole thing
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
if use_patch_dataset:
data_path=os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
else:
task="Task06_Lung"
monai.apps.DecathlonDataset(root_dir=root_dir, task=task, section="training", download=True)
%run -i ./bbox_gen.py {root_dir}
data_path=os.path.join(root_dir, "patch")
lesion = glob.glob(os.path.join(data_path,"lesion_*"))
non_lesion = glob.glob(os.path.join(data_path,"norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(f"Before balance -- Num lesion: {len(lesion)}, num non-lesion: {len(non_lesion)}")
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(f"After balance -- Num lesion: {len(lesion)}, num non-lesion: {len(non_lesion)}")
labels = np.asarray([[0., 1.]] * len(lesion) + [[1., 0.]] * len(non_lesion))
all_files = [{"image": img, "label": label} for img, label in zip(lesion + non_lesion, labels)]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(np.asarray(val_labels)== 1)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for visualisation.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged("image", a_min=-1000.0, a_max=500.0, b_min=0.0, b_max=1.0, clip=True),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
ToTensord("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged("image", a_min=-1000.0, a_max=500.0, b_min=0.0, b_max=1.0, clip=True),
Resized("image", win_size, "trilinear", True),
ToTensord(("image", "label")),
]
)
train_ds = monai.data.PersistentDataset(data=train_files, transform=train_transforms)
train_loader = monai.data.DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True)
val_ds = monai.data.PersistentDataset(data=val_files, transform=val_transforms)
val_loader = monai.data.DataLoader(val_ds, batch_size=2, num_workers=2, pin_memory=True)
###Output
_____no_output_____
###Markdown
Create the model, loss function, and optimizer.
###Code
model = monai.networks.nets.densenet.densenet121(spatial_dims=3, in_channels=1, out_channels=2).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
from IPython.display import clear_output
# start training
val_interval = 1
total_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(total_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{total_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = batch_data["image"].to(device), batch_data["label"].to(device)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = val_data["image"].to(device), val_data["label"].to(device)
val_outputs = model(val_images)
value = torch.eq(val_outputs.argmax(dim=1), val_labels.argmax(dim=1))
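                # argmax over the two logits gives the predicted class index and
                # argmax over the one-hot label gives the true index; torch.eq
                # marks matches, which are counted below to compute accuracy.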
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(model.state_dict(), "best_metric_model_classification3d_array.pth")
print(
"current epoch: {} current accuracy: {:.4f} best accuracy: {:.4f} at epoch {}".format(
epoch + 1, metric, best_metric, best_metric_epoch
)
)
print(f"train completed, best_metric: {best_metric:.4f} at epoch: {best_metric_epoch}")
plt.plot(epoch_loss_values,label='training loss')
val_epochs=np.linspace(1, total_epochs, np.floor(total_epochs/val_interval).astype(np.int32))
plt.plot(val_epochs, metric_values,label='validation acc')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Value');
# Reload the best network and display info
model_3d = monai.networks.nets.densenet.densenet121(spatial_dims=3, in_channels=1, out_channels=2).to(device)
model_3d.load_state_dict(torch.load("best_metric_model_classification3d_array.pth"))
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"]))
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize='true',
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor='white')[1]);
###Output
              precision    recall  f1-score   support
  non-lesion       1.00      0.86      0.92        14
      lesion       0.85      1.00      0.92        11
    accuracy                           0.92        25
   macro avg       0.92      0.93      0.92        25
weighted avg       0.93      0.92      0.92        25
###Markdown
Interpretability: use GradCAM and occlusion sensitivity to interpret the network's predictions. The occlusion sensitivity returns two images: the sensitivity image and the most probable class. * Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class. * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[..., i]``. * Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(nn_module=model_3d, target_layers="class_layers.relu")
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print("original feature shape", cam.feature_map_size([1, 1] + list(win_size), device))
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(nn_module=model_3d, mask_size=12, n_batch=1, stride=28)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box=[-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25,15), facecolor='white')
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[item] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display lesion images that the model also predicts as lesion
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)):
cmap = 'gray' if row == 0 else 'jet'
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis('off')
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
from monai.visualize import plot_2d_or_3d_image
from torch.utils.tensorboard import SummaryWriter
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches. The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models. For the demo data: - Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`). - Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1vl330aJew1NCc31IVYJSAh3y0XdDdydi [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q monai[tqdm]
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
ToTensord,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for visualisation.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
ToTensord("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", win_size, "trilinear", True),
ToTensord(("image", "label")),
]
)
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(data=val_files, transform=val_transforms)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Create the model, loss function, and optimizer.
###Code
model = monai.networks.nets.densenet.densenet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
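# metric_values holds one accuracy per validated epoch (every val_interval
# epochs), so it is plotted against the corresponding epoch numbers below.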
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.densenet.densenet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
              precision    recall  f1-score   support
  non-lesion       1.00      0.86      0.92        14
      lesion       0.85      1.00      0.92        11
    accuracy                           0.92        25
   macro avg       0.92      0.93      0.92        25
weighted avg       0.93      0.92      0.92        25
###Markdown
Interpretability: use GradCAM and occlusion sensitivity to interpret the network's predictions. The occlusion sensitivity returns two images: the sensitivity image and the most probable class. * Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class. * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[..., i]``. * Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display lesion images that the model also predicts as lesion
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
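    # As described in the markdown above, the last axis of the occlusion output
    # indexes the classes, so this keeps the sensitivity map of the predicted class.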
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches. The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models. For the demo data: - Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`). - Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1pQdzdkkC9c2GOblLgpGlG3vxsSK9NtDx [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
EnsureTyped,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for visualisation.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
EnsureTyped("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
EnsureTyped(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
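# PersistentDataset caches the outputs of the deterministic transforms to disk
# in cache_dir, so repeated epochs (and reruns) skip the loading/rescaling work.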
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Create the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
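# Mixed-precision setup: the forward pass below runs under autocast, the loss is
# scaled before backward() to avoid fp16 gradient underflow, and scaler.step()
# unscales the gradients before the optimizer update.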
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
              precision    recall  f1-score   support
  non-lesion       1.00      0.86      0.92        14
      lesion       0.85      1.00      0.92        11
    accuracy                           0.92        25
   macro avg       0.92      0.93      0.92        25
weighted avg       0.93      0.92      0.92        25
###Markdown
Interpretability: use GradCAM and occlusion sensitivity to interpret the network's predictions. The occlusion sensitivity returns two images: the sensitivity image and the most probable class. * Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class. * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[..., i]``. * Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display lesion images that the model also predicts as lesion
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches. The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models. For the demo data: - Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`). - Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1pQdzdkkC9c2GOblLgpGlG3vxsSK9NtDx [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
EnsureTyped,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
url = "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/lung_lesion_patches.tar.gz"
monai.apps.download_and_extract(url, output_dir=data_path)
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used both for training/validation and for visualisation.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
EnsureTyped("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", spatial_size=win_size, mode="trilinear", align_corners=True),
EnsureTyped(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Create the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
              precision    recall  f1-score   support
  non-lesion       1.00      0.86      0.92        14
      lesion       0.85      1.00      0.92        11
    accuracy                           0.92        25
   macro avg       0.92      0.93      0.92        25
weighted avg       0.93      0.92      0.92        25
###Markdown
Interpretability: use GradCAM and occlusion sensitivity to interpret the network's predictions. The occlusion sensitivity returns two images: the sensitivity image and the most probable class. * Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class. * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[..., i]``. * Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
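    # unsqueeze(0) adds the batch dimension expected by the network, and
    # data["label"][1] reads the second one-hot entry (1.0 for lesion, 0.0 otherwise).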
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
    # Only display lesion images that the model also predicts as lesion
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches. The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models. For the demo data: - Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`). - Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1vl330aJew1NCc31IVYJSAh3y0XdDdydi [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
ToTensord,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used for both training/validation, as well as visualisations.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
ToTensord("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", win_size, "trilinear", True),
ToTensord(("image", "label")),
]
)
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(data=val_files, transform=val_transforms)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Start the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
precision recall f1-score support
non-lesion 1.00 0.86 0.92 14
lesion 0.85 1.00 0.92 11
accuracy 0.92 25
macro avg 0.92 0.93 0.92 25
weighted avg 0.93 0.92 0.92 25
###Markdown
InterpretabilityUse GradCAM and occlusion sensitivity for network interpretability.The occlusion sensitivity returns two images: the sensitivity image and the most probable class.* Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[...,i]``.* Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
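As a small, self-contained illustration of the indexing described above (a sketch using a dummy tensor; the shape and class index here are assumptions for demonstration, not part of the tutorial's pipeline):
import torch
occ_map = torch.rand(1, 1, 8, 8, 4, 2)  # dummy sensitivity volume: input shape (B, C, H, W, D) with N=2 classes appended
class_idx = 1  # hypothetical class of interest
sens_for_class = occ_map[..., class_idx]
print(sens_for_class.shape)  # torch.Size([1, 1, 8, 8, 4]) -- same shape as the input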
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds))
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
# Only display correctly classified lesion (tumour) images
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
This tutorial trains a 3D densenet for lung lesion classification from CT image patches.The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models.For the demo data:- Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`).- Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1pQdzdkkC9c2GOblLgpGlG3vxsSK9NtDx[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
###Code
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
EnsureTyped,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
###Output
Before balance -- Num lesion: 64, num non-lesion: 187
After balance -- Num lesion: 64, num non-lesion: 64
total items: 128
###Markdown
Split the data into 80% training and 20% validation
###Code
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
###Output
total train: 103
total valid: 25
###Markdown
Create the data loaders. These loaders will be used for both training/validation, as well as visualisations.
###Code
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
EnsureTyped("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", win_size, "trilinear", True),
EnsureTyped(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
###Output
_____no_output_____
###Markdown
Start the model, loss function, and optimizer.
###Code
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
###Output
_____no_output_____
###Markdown
Run training iterations.
###Code
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
###Output
precision recall f1-score support
non-lesion 1.00 0.86 0.92 14
lesion 0.85 1.00 0.92 11
accuracy 0.92 25
macro avg 0.92 0.93 0.92 25
weighted avg 0.93 0.92 0.92 25
###Markdown
InterpretabilityUse GradCAM and occlusion sensitivity for network interpretability.The occlusion sensitivity returns two images: the sensitivity image and the most probable class.* Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded. * Big decreases in the probability imply that that region was important in inferring the given class * The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[...,i]``.* Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
###Code
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds))
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
# Only display correctly classified lesion (tumour) images
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
###Output
_____no_output_____ |
examples/ch03/snippets_ipynb/03_13.ipynb | ###Markdown
3.13 Boolean Operators `and`, `or` and `not` Boolean Operator `and`
###Code
gender = 'Female'
age = 70
if gender == 'Female' and age >= 65:
print('Senior female')
###Output
_____no_output_____
###Markdown
Boolean Operator `or`
###Code
semester_average = 83
final_exam = 95
if semester_average >= 90 or final_exam >= 90:
print('Student gets an A')
###Output
_____no_output_____
###Markdown
Boolean Operator `not`
###Code
grade = 87
if not grade == -1:
print('The next grade is', grade)
if grade != -1:
print('The next grade is', grade)
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
###Output
_____no_output_____
###Markdown
3.13 Built-In Function `range`: A Deeper Look
###Code
for number in range(5, 10):
print(number, end=' ')
for number in range(0, 10, 2):
print(number, end=' ')
for number in range(10, 0, -2):
print(number, end=' ')
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
###Output
_____no_output_____ |
HW4 - Linear Regression Boston Housing.ipynb | ###Markdown
Simple Linear Regression Here, we are implementing simple linear regression from scratch. You will use the pandas library to load the csv file into a dataframe:
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# read the csv file and load into a pandas dataframe
# make sure Boston.csv is in the same file path as this notebook
boston = pd.read_csv('Boston.csv')
# read the above link to learn more about what each of the columns indicate
boston.head()
###Output
_____no_output_____
###Markdown
Simple linear regression builds a linear relationship between an input variable $X$ and an output variable $Y$. We can define this linear relationship as follows: $$Y = \beta_0 + \beta_1X$$ Objective: find the linear relationship between the proportion of non-retail business acres per town (indus) and the full-value property-tax rate per 10,000 dollars (tax)So our equation will look like:$$TAX = \beta_0 + \beta_1INDUS$$Here, the coefficient $\beta_0$ is the intercept, and $\beta_1$ is the scale factor or slope. How do we determine the values of these coefficients? There are several different methods to do so, but we will focus on the Ordinary Least Squares (OLS) method. This method minimizes the sum of the squares of the differences between the observed dependent variable and those predicted by the linear function. Recall that a residual is the difference between any data point and the line of regression. When we develop a regression model, we want the sum of the residuals squared to be minimized, indicating that the model is a close fit to the data. $$RSS = \sum_{i =1}^{n} (y_i - f(x_i))^2$$$$= \sum_{i =1}^{n} (y_i - (\beta_0 + \beta_1x_i))^2$$This is the objective function we minimize to find $\beta_0$ and $\beta_1$.
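For reference, setting the partial derivatives of the RSS to zero yields the normal equations whose solution gives the closed-form coefficients used below (a standard derivation step added here, not part of the original assignment text):$$\frac{\partial RSS}{\partial \beta_0} = -2\sum_{i=1}^{n}\left(y_i - \beta_0 - \beta_1 x_i\right) = 0, \qquad \frac{\partial RSS}{\partial \beta_1} = -2\sum_{i=1}^{n}x_i\left(y_i - \beta_0 - \beta_1 x_i\right) = 0$$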
###Code
# set X to 'indus' and y to 'tax'
X = boston['indus']
y = boston['tax']
###Output
_____no_output_____
###Markdown
First, visualize the data by plotting X and y using matplotlib. Be sure to include a title and axis labels.
###Code
# TODO: display plot
plt.plot(X, y, 'o')
# TODO: labels and title
plt.ylabel('Tax')
plt.xlabel('Indus')
plt.title('Boston Tax Vs. Indus')
###Output
_____no_output_____
###Markdown
TODO: What do you notice about the relationship between the variables? A: There seems to be a positive linear relationship, but the clumping of points at low indus values will most likely lower our R^2 and inflate the RSS. Next, find the coefficients. The values for $\beta_0$ and $\beta_1$ are given by the following equations, where $n$ is the total number of values. This derivation was done in class. $$\beta_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}$$$$\beta_0 = \bar{y} - \beta_1\bar{x}$$
###Code
def get_coeffs(X, y):
X_mu = X.mean()
y_mu = y.mean()
b1 = np.sum((y-y_mu)*(X-X_mu))/np.sum((X-X_mu)**2)
b0 = y_mu - b1*X_mu
return(b0,b1)
b0, b1 = get_coeffs(X, y)
print("Regression line: TAX = " + str(round(b0)) + " + " + str(round(b1)) +"*INDUS")
###Output
Regression line: TAX = 211.0 + 18.0*INDUS
###Markdown
Plot the regression line overlaid on the real y-values.
###Code
# TODO: plot y-values
reg_line = b0 + b1*X
plt.plot(X, y, 'o')
plt.plot(X, reg_line)
plt.xlim(0, 50)
plt.ylabel('Tax')
plt.xlabel('Indus')
plt.title('Boston Tax Vs. Indus')
###Output
_____no_output_____
###Markdown
The line appears to fit the data, but first, let us find the RSS to evaluate this model. The RSS is used to measure the amount of variance in the data set that is not explained by the regression model. Recall that$$RSS = \sum_{i =1}^{n} (y_i - (\beta_0 + \beta_1x_i))^2$$
###Code
# TODO: implement function
def get_RSS(b0, b1, X, y):
'''
Params:
b0: beta 0
b1: beta 1
X: X vector
y: y vector
Returns:
residual sum of squares (RSS)
'''
RSS = np.sum((y - (b0 + b1*X))**2)
return RSS
# run this cell to print RSS
print("RSS:", get_RSS(b0, b1, X, y))
###Output
RSS: 6892554.224031562
###Markdown
We can also evaluate the model through the Root Mean Squared Error (RMSE) and the Coefficient of Determination ($R^2$ score). - The RMSE is similar to the RSS, but provides a value with more interpretable units -- in our case, tax rate per 10,000 dollars. - The $R^2$ value represents the proportion of the variance for the dependent variable that is explained by the independent variable. Use the following equations to find the RMSE and $R^2$ score:$$ RMSE = \sqrt{\sum_{i=1}^{n} \frac{1}{n} (\hat{y_i} - y_i)^2 }$$$$ R^2 = 1 - \frac{SS_r}{SS_t} $$ where$$SS_t = \sum_{i = 1}^{n} (y_i - \bar{y})^2$$and$$SS_r = \sum_{i=1}^{n} (y_i - \hat{y_i})^2$$
###Code
# TODO: implement function
def get_RMSE(b0, b1, X, y):
'''
Params:
b0: beta 0
b1: beta 1
X: X vector
y: y vector
Returns:
rmse
'''
rmse = np.sqrt(np.mean((y - (b0 + b1 * X)) ** 2))
return(rmse)
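# Added sanity check (a sketch; it assumes get_RSS from the earlier cell is available):
# for the same coefficients, RMSE should equal sqrt(RSS / n).
print("sqrt(RSS/n): ", np.sqrt(get_RSS(b0, b1, X, y) / len(y)))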
# run cell to print RMSE
print("RMSE: ", get_RMSE(b0, b1, X, y))
# TODO: implement function
def get_R2(b0, b1, X, y):
'''
Params:
b0: beta 0
b1: beta 1
X: X vector
y: y vector
Returns:
r2 score
'''
RSS = get_RSS(b0, b1, X, y)
TSS = np.sum((y - y.mean())**2)
r2 = 1 - (RSS/TSS)
return(r2)
# run cell to print RMSE
print("R2: ", get_R2(b0, b1, X, y))
###Output
R2: 0.519495237003779
###Markdown
TODO: Analyze what the above $R^2$ score indicates about the model. A: Our R^2 above is approximately 0.52. R^2 is the proportion of the variation in the output that our trained model explains. Because our R^2 score is only 0.52 even though there is a positive relationship, this model might need more inputs and interaction terms in order to predict the output more accurately, which would raise the R^2. Now, we will compare the above results with the results from using scikit-learn, a machine learning library in Python. Read the documentation (https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) to learn how to use this library. Return the $R^2$ score and RMSE.
###Code
# TODO: scikit learn function
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
def linear_regression_SKL(X, y):
'''
Params:
X: X vector
y: y vector
Returns:
rmse and r2 as a tuple
'''
x = np.array(X).reshape((-1, 1))
reg = LinearRegression().fit(x,y)
y_pred = reg.predict(x)
rmse = np.sqrt(mean_squared_error(y, y_pred))
r2 = reg.score(x, y)
return(rmse, r2)
# run this cell to print results from SKL LR
linear_regression_SKL(X, y)
###Output
_____no_output_____ |
FinalSubmission_Week2.ipynb | ###Markdown
---_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 2 - Pandas IntroductionAll questions are weighted the same in this assignment. Part 1The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on [All Time Olympic Games Medals](https://en.wikipedia.org/wiki/All-time_Olympic_Games_medal_table), and does some basic data cleaning. The columns are organized as # of Summer games, Summer medals, # of Winter games, Winter medals, total # of games, total # of medals. Use this dataset to answer the questions below.
###Code
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.head()
###Output
_____no_output_____
###Markdown
Question 0 (Example)What is the first country in df?*This function should return a Series.*
###Code
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1Which country has won the most gold medals in summer games?*This function should return a single string value.*
###Code
def answer_one():
return df['Gold'].argmax()
answer_one()
###Output
_____no_output_____
###Markdown
Question 2Which country had the biggest difference between their summer and winter gold medal counts?*This function should return a single string value.*
###Code
def answer_two():
df['G_diff']=abs(df['Gold']-df['Gold.1'])
return df['G_diff'].argmax()
answer_two()
###Output
_____no_output_____
###Markdown
Question 3Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count? $$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$Only include countries that have won at least 1 gold in both summer and winter.*This function should return a single string value.*
###Code
def answer_three():
df['T_Gold']=df['Gold']+df['Gold.1']+df['Gold.2']
eligible_df=df[(df['Gold']>0) & (df['Gold.1']>0)]
eligible_df['G_diff_rel']=abs(eligible_df['Gold']-eligible_df['Gold.1'])/eligible_df['T_Gold']
eligible_df['G_diff_rel'].argmax()
return eligible_df['G_diff_rel'].argmax()
answer_three()
###Output
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
after removing the cwd from sys.path.
###Markdown
Question 4Write a function that creates a Series called "Points" which is a weighted value where each gold medal (`Gold.2`) counts for 3 points, silver medals (`Silver.2`) for 2 points, and bronze medals (`Bronze.2`) for 1 point. The function should return only the column (a Series object) which you created, with the country names as indices.*This function should return a Series named `Points` of length 146*
###Code
def answer_four():
Points=df['Gold.2']*3 +df['Silver.2']*2+df['Bronze.2']*1
return Points
answer_four()
###Output
_____no_output_____
###Markdown
Part 2For the next set of questions, we will be using census data from the [United States Census Bureau](http://www.census.gov). Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. [See this document](https://www2.census.gov/programs-surveys/popest/technical-documentation/file-layouts/2010-2015/co-est2015-alldata.pdf) for a description of the variable names.The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate. Question 5Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)*This function should return a single string value.*
###Code
census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
# census_df = pd.read_csv('census.csv')
df1=census_df.set_index(['STNAME'])
u=df1.index.unique()
s=[len(df1.loc[i]) for i in u]
S2=pd.Series(s, index=u)
return S2.argmax()
answer_five()
###Output
_____no_output_____
###Markdown
Question 6**Only looking at the three most populous counties for each state**, what are the three most populous states (in order of highest population to lowest population)? Use `CENSUS2010POP`.*This function should return a list of string values.*
###Code
def answer_six():
df6=census_df
u_county=set(df6['CTYNAME'])
u_State=set(df6['STNAME'])
df6=census_df.set_index(['STNAME','CTYNAME'])
s6=df6['CENSUS2010POP']
l=[] #list l for storing sum of 3 largest(populus county) value of each state
for s in u_State: # iterating over all the states
# going in by multilevel index ['Alabama','Alabama County'] this will list each county index
k=s6.loc[[s,s]].nlargest(3).sum() # k is the sum of 3 largst county for each state
l.append(k) # storing value of each k in list l
pdS=pd.Series(l, index=u_State) # Making a new series from l (list of ) with its indexes from u_state
pdS.nlargest(3) # again applying nlargest on this series
dd=pdS.nlargest(3)
# print(dd.index)
l=dd.index
j=[(l[i]) for i in range(len(l))]
# print(type(l[0]))
return j
answer_six()
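# A more concise equivalent sketch (an added aside; it assumes the county-level
# rows are those with SUMLEV == 50, so state totals are excluded before ranking):
print(census_df[census_df['SUMLEV'] == 50]
      .groupby('STNAME')['CENSUS2010POP']
      .apply(lambda s: s.nlargest(3).sum())
      .nlargest(3)
      .index.tolist())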
###Output
_____no_output_____
###Markdown
Question 7Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015, you need to consider all six columns.)e.g. If County Population in the 5 year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50.*This function should return a single string value.*
###Code
def answer_seven():
df7a=census_df[census_df['SUMLEV']==50] # keep only county-level rows (SUMLEV == 50 excludes state totals)
u_county=set(df7a['CTYNAME']) # Reading all the u_county
u_State=set(df7a['STNAME']) # Reading all the u_state
df7b=df7a.set_index(['CTYNAME','STNAME'])
df7=df7b[['POPESTIMATE2010','POPESTIMATE2011','POPESTIMATE2012','POPESTIMATE2013','POPESTIMATE2014','POPESTIMATE2015']]
df7['Min']=df7.loc[:,['POPESTIMATE2010','POPESTIMATE2011','POPESTIMATE2012','POPESTIMATE2013','POPESTIMATE2014','POPESTIMATE2015']].min(axis=1)
df7['Max']=df7.loc[:,['POPESTIMATE2010','POPESTIMATE2011','POPESTIMATE2012','POPESTIMATE2013','POPESTIMATE2014','POPESTIMATE2015']].max(axis=1)
df7['Diff']=df7['Max']-df7['Min'] # Column showing difference between highest populus and lowest populus year
largest_C=df7['Diff'].nlargest(1) # finding the largest among difference column
r=largest_C.argmax() # returning the index of our largest difference val
return r[0]
answer_seven()
# return "YOUR ANSWER HERE"
###Output
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:10: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
# Remove the CWD from sys.path while we load stuff.
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
# This is added back by InteractiveShellApp.init_path()
C:\Users\wlaik\Anaconda3\lib\site-packages\ipykernel_launcher.py:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if sys.path[0] == '':
###Markdown
Question 8In this datafile, the United States is broken up into four regions using the "REGION" column. Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE 2014.*This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).*
###Code
def answer_eight():
census_df = pd.read_csv('census.csv')
df8a=census_df[(census_df['REGION']==2) | (census_df['REGION']==1)]
df8b=df8a[(df8a['CTYNAME']=='Washington County')]
df8c=df8b[(df8b['POPESTIMATE2015']>df8b['POPESTIMATE2014'])]
df8d=df8c[['STNAME','CTYNAME','POPESTIMATE2014','POPESTIMATE2015']]
df8e=df8d[['STNAME','CTYNAME']]
return df8e
answer_eight()
###Output
_____no_output_____ |
minimos-quadrados/03-funcao-exponencial.ipynb | ###Markdown
Curve Fitting to a List of Points Least Squares Method--- Exponential FunctionGiven a list of $n$ points in the plane $\mathbb{R}^2$: $(x_1,y_1), (x_2,y_2), \ldots, (x_n,y_n)$. A fit of a curve of the form $y(x) = \alpha + \beta e^x$ to the list of points can be obtained, by the least squares method, with:$$x = (A^tA)^{-1}(A^tb)$$Where $A = \left[ \begin{matrix} 1 & e^{x_1} \\ 1 & e^{x_2} \\ \vdots & \vdots \\ 1 & e^{x_n} \end{matrix} \right]$, $b = \left[ \begin{matrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{matrix} \right]$.The vector $x$, the result of the operations $(A^tA)^{-1}(A^tb)$, has the form $x = \left[ \begin{matrix} \alpha \\ \beta \end{matrix} \right]$, where $\alpha$ and $\beta$ are the coefficients of the fitted exponential function:$$y(x) = \alpha + \beta e^x$$ --- Example 1We will fit an exponential curve of the form $y(x) = \alpha + \beta e^x$ to the points: $(1,1),(3,2),(5,3),(6,5),(7,7)$ In [GeoGebra](https://www.geogebra.org/m/ajynvkuz) we mark the points in the plane, to get an idea of what the curve fitted to them will look like.
###Code
# Using Python to plot the listed points
# Importing pyplot: the Python plotting library
from matplotlib import pyplot as plt
# x axis, y axis
plt.plot([1,3,5,6,7],[1,2,3,5,7],'o') # The 'o' argument plots the points as markers
plt.axis([0, 8, 0, 8]) # [xmin, xmax, ymin, ymax]
plt.xlabel('x'), plt.ylabel('y') # Labels for the x and y axes
plt.grid() # Showing the grid of the plane
plt.show()
# Solving the operations x = (A^tA)^(-1)(A^tb)
# Importing the library for working with matrices and math
import numpy as np
# Creating and displaying the matrices needed for the calculations
# Assigning an approximation of e to the variable e
e = 2.71828
A = np.array([[1,e**1], [1,e**3], [1,e**5], [1,e**6], [1,e**7]])
b = np.array([[1],[2],[3],[5],[7]])
A, b
M = A.T # the variable M receives the transpose of matrix A
M = M.dot(A) # the variable M receives the product MA
# we use the inv method from numpy's linear algebra package to invert M
M = np.linalg.inv(M)
M # Displaying M
N = A.T # the variable N receives the transpose of A
N = N.dot(b) # the variable N receives the product Nb
N # Displaying N
# With the operations performed we have
# M = (A^tA)^(-1) and N = A^tb
# The variable x receives the product MN
x = M.dot(N)
# Displaying the vector x
x
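# Alternative sketch (an addition, not in the original notebook): np.linalg.lstsq
# solves the same least-squares problem without forming the explicit inverse,
# which is numerically more stable than (A^tA)^(-1)(A^tb).
print(np.linalg.lstsq(A, b, rcond=None)[0])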
###Output
_____no_output_____
###Markdown
Thus our function, with some rounding, has the form $$ y=1.921 + 0.005 ℯ^{x} $$ We use GeoGebra to plot the curve and the points. [GeoGebra](https://www.geogebra.org/m/thuykp7m)
###Code
# Using Python to plot the points and the fitted curve
# Plotting the points
plt.plot([1,3,5,6,7],[1,2,3,5,7],'o')
# Preparation for plotting the curve
x = np.linspace(0, 8, 1000) # 1000 points in [0, 8]
y = 1.921 + 0.005*e**x # Computing the y values for each x
# Plotting the curve
plt.plot(x,y)
# Plot settings
plt.axis([0, 8, 0, 8])
plt.xlabel('x'), plt.ylabel('y')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
--- Algorithm for fitting an exponential curve to a list of points
###Code
'''
Method for fitting an Exponential Function curve to a list of points
Least Squares
Notes:
--> The function's arguments are two lists of the same size;
--> The first list holds the x coordinates of the points we want to fit;
--> And the second list holds the y coordinates of the points.
The function returns a vector x = [a,b]^T, where y = a + b*e^x is the exponential function.
'''
def exponencial_minimos_quadrados(X,Y):
import numpy as np
from numpy.linalg import inv
n = len(X)
if len(X) != len(Y):
return 'Incorrect input data.'
else:
A = np.zeros((n,2))
v = np.zeros((n,1))
for i in range(len(X)):
v[i] = Y[i]
A[i][0] = 1
A[i][1] = (np.e)**(X[i])
At = np.transpose(A)
M = np.dot(At,A)
N = np.dot(At,v)
Mi = inv(M)
x = np.dot(Mi,N)
return x
# Applying the method to the list of points
# (1,1),(3,2),(5,3),(6,5),(7,7)
X = [1,3,5,6,7]
Y = [1,2,3,5,7]
print(exponencial_minimos_quadrados(X,Y))
# Applying the method to another list of points
# (1.5,1), (2.5,1.8), (3,4), (4,7)
X = [1.5,2.5,3,4]
Y = [1,1.8,4,7]
c = exponencial_minimos_quadrados(X,Y) # Coefficient vector
print(c)
# Plotting the points and the fitted curve
from matplotlib import pyplot as plt
import numpy as np
# Plotting the points
plt.plot(X,Y,'o')
x = np.linspace(0, 8, 1000)
y = c[0] + c[1]*(np.e)**x # Computing the y values for each x
# Plotting the curve
plt.plot(x,y)
# Plot settings
plt.axis([0, 8, 0, 8])
plt.xlabel('x'), plt.ylabel('y')
plt.grid()
plt.show()
###Output
_____no_output_____ |
meanderpy/meanderpy.ipynb | ###Markdown
Input parameters
###Code
W = 200.0 # channel width (m)
D = 16.0 # channel depth (m)
pad = 100 # padding (number of nodepoints along centerline)
deltas = 50.0 # sampling distance along centerline
nit = 2000 # number of iterations
Cf = 0.03 # dimensionless Chezy friction factor
crdist = 2*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 30 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.01 # initial slope (matters more for submarine channels than rivers)
t1 = 300 # time step when incision starts
t2 = 600 # time step when lateral migration starts
t3 = 2000 # time step when aggradation starts
aggr_factor = 2 # aggradation factor (it kicks in after t3)
###Output
_____no_output_____
###Markdown
Initialize model
###Code
# from imp import reload
reload(mp)
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
###Output
_____no_output_____
###Markdown
Run simulation
###Code
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60) # plotting
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Build 3D model
###Code
chb_3d = chb.build_3d_model('submarine',h_mud=1.0,levee_width=3000.0,h=10.0,w=W,bth=10.0,dcr=10.0,dx=20.0,
delta_s=deltas,starttime=chb.cl_times[50],endtime=chb.cl_times[55])
fig1,fig2,fig3 = chb_3d.plot_xsection(400, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 4)
# plot thickness of last channel sand
plt.figure()
plt.imshow(chb_3d.strat[:,:,-1]-chb_3d.strat[:,:,-2],cmap='viridis')
plt.colorbar()
plt.figure()
plt.plot(chb.channels[0].x,chb.channels[0].z,'b')
plt.plot(chb.channels[-1].x,chb.channels[-1].z,'r')
###Output
_____no_output_____
###Markdown
Input parameters
###Code
W = 200.0 # channel width (m)
D = 12.0 # channel depth (m)
pad = 100 # padding (number of nodepoints along centerline)
deltas = 50.0 # sampling distance along centerline
nit = 2000 # number of iterations
Cf = 0.022 # dimensionless Chezy friction factor
crdist = 1.5*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.0 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1400 # time step when aggradation starts
aggr_factor = 4e-9 # aggradation factor (m/s, about 0.18 m/year, it kicks in after t3)
###Output
_____no_output_____
###Markdown
Initialize model
###Code
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
###Output
_____no_output_____
###Markdown
Run simulation
###Code
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat', 20, 60, chb.cl_times[-1], len(chb.channels)) # plotting
###Output
100%|██████████| 2000/2000 [00:45<00:00, 44.06it/s]
###Markdown
Create a "geomorphologic" display that takes into account that older point bars and cutoffs are covered by vegetation:
###Code
fig = chb.plot('morph', 20, 60, chb.cl_times[-1], len(chb.channels))
###Output
_____no_output_____
###Markdown
Create a map that is colored by the age of the point bars:
###Code
fig = chb.plot('age', 20, 60, chb.cl_times[-1], len(chb.channels))
###Output
_____no_output_____
###Markdown
Create movie
###Code
dirname = '/Users/zoltan/Dropbox/Channels/temp/'
chb.create_movie(xmin=10000, xmax=30000, plot_type='strat', filename='movie', dirname=dirname,
pb_age = 1, ob_age = 20, end_time = chb.cl_times[-1], n_channels = len(chb.channels))
###Output
_____no_output_____
###Markdown
Build 3D fluvial model Non-interactive definition of x- and y-extentIf the parameters 'xmin', 'xmax', ymin', and 'ymax' are non-zero (as in the cell below), they will be used to define the extent of the area of interest used to build the 3D model. At least initially, it is a good idea to keep this segment relatively small (only a few bends long) to avoid building very large models.
###Code
h_mud = 0.3 # thickness of overbank deposit for each time step
dx = 20.0 # gridcell size in meters
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('fluvial', h_mud=h_mud, levee_width=800.0, h=12.0, w=W,
bth=0.0, dcr=10.0, dx=dx, delta_s=deltas, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
xmin=7000, xmax=12000, ymin=-3500, ymax=3500)
# create plots
fig1,fig2,fig3 = chb_3d.plot_xsection(100, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 4)
###Output
_____no_output_____
###Markdown
Interactive definition of x- and y-extentAfter you run the next cell, you need to select the upper left and lower right corners of the area of interest for which you want to build a 3D model. At least initially, it is a good idea to keep this segment relatively small (only a few bends long) to avoid building very large models. The area will only be highlighted (as a red rectangle) after the 3d model building has finished.
###Code
h_mud = 0.3 # thickness of overbank deposit for each time step
dx = 20.0 # gridcell size in meters
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('fluvial', h_mud=h_mud, levee_width=800.0, h=12.0, w=W,
bth=0.0, dcr=10.0, dx=dx, delta_s=deltas, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
xmin=0, xmax=0, ymin=0, ymax=0)
###Output
_____no_output_____
###Markdown
Build 3D submarine channel model
###Code
W = 200.0 # channel width (m)
D = 12.0 # channel depth (m)
pad = 50 # padding (number of nodepoints along centerline)
deltas = 100.0 # sampling distance along centerline
nit = 1500 # number of iterations
Cf = 0.02 # dimensionless Chezy friction factor
crdist = 1.5*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 50 # approximate number of bends you want to model
Sl = 0.01 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1000 # time step when aggradation starts
aggr_factor = 4.0 # aggradation factor (it kicks in after t3)
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60,chb.cl_times[-1],len(chb.channels)) # plotting
###Output
100%|██████████| 1500/1500 [00:18<00:00, 81.94it/s]
###Markdown
Interactive definition of x- and y-extentAfter you run the next cell, you need to select the upper left and lower right corners of the area of interest for which you want to build a 3D model. At least initially, it is a good idea to keep this segment relatively small (only a few bends long) to avoid building very large models. The area will only be highlighted (as a red rectangle) after the 3d model building has finished.
###Code
h_mud = 3.0*np.ones((len(chb.cl_times[20:]),))
dx = 15.0
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('submarine', h_mud=h_mud, levee_width=5000.0, h=12.0, w=W,
bth=6.0, dcr=7.0, dx=dx, delta_s=deltas, starttime=chb.cl_times[20], endtime=chb.cl_times[-1],
xmin=0,xmax=0,ymin=0,ymax=0)
fig1,fig2,fig3 = chb_3d.plot_xsection(100, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 10)
###Output
_____no_output_____
###Markdown
Input parameters
###Code
W = 200.0 # channel width (m)
D = 12.0 # channel depth (m)
pad = 100 # padding (number of nodepoints along centerline)
deltas = 50.0 # sampling distance along centerline
nit = 2000 # number of iterations
Cf = 0.022 # dimensionless Chezy friction factor
crdist = 1.5*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.0 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1400 # time step when aggradation starts
aggr_factor = 4e-9 # aggradation factor (m/s, about 0.18 m/year, it kicks in after t3)
###Output
_____no_output_____
###Markdown
Initialize model
###Code
from imp import reload
reload(mp)
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
###Output
_____no_output_____
###Markdown
Run simulation
###Code
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60) # plotting
###Output
_____no_output_____
###Markdown
Build 3D fluvial model
###Code
h_mud = 0.4 # thickness of overbank deposit for each time step
dx = 10.0 # gridcell size in meters
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('fluvial',h_mud=h_mud,levee_width=4000.0,h=12.0,w=W,bth=0.0,
dcr=10.0,dx=dx,delta_s=deltas,starttime=chb.cl_times[20],endtime=chb.cl_times[-1],
xmin=0,xmax=0,ymin=0,ymax=0)
# create plots
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 4)
###Output
_____no_output_____
###Markdown
Build 3D submarine channel model
###Code
W = 200.0 # channel width (m)
D = 12.0 # channel depth (m)
pad = 50 # padding (number of nodepoints along centerline)
deltas = 100.0 # sampling distance along centerline
nit = 1500 # number of iterations
Cf = 0.02 # dimensionless Chezy friction factor
crdist = 1.5*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 50 # approximate number of bends you want to model
Sl = 0.01 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1000 # time step when aggradation starts
aggr_factor = 4.0 # aggradation factor (it kicks in after t3)
reload(mp)
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60) # plotting
h_mud = 3.0*np.ones((len(chb.cl_times[20:]),))
dx = 15.0
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('submarine',h_mud=h_mud,levee_width=5000.0,h=12.0,w=W,bth=6.0,
dcr=7.0,dx=dx,delta_s=deltas,starttime=chb.cl_times[20],endtime=chb.cl_times[-1],
xmin=0,xmax=0,ymin=0,ymax=0)
fig1,fig2,fig3 = chb_3d.plot_xsection(400, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 10)
###Output
_____no_output_____
###Markdown
Input parameters
###Code
W = 200.0 # channel width (m)
D = 12.0 # channel depth (m)
pad = 100 # padding (number of nodepoints along centerline)
deltas = 50.0 # sampling distance along centerline
nit = 2000 # number of iterations
Cf = 0.022 # dimensionless Chezy friction factor
crdist = 1.5*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.0 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1400 # time step when aggradation starts
aggr_factor = 4e-9 # aggradation factor (m/s, about 0.13 m/year; it kicks in after t3)
###Output
_____no_output_____
###Markdown
Initialize model
###Code
from importlib import reload  # 'imp' is deprecated; importlib.reload behaves the same
reload(mp)
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
###Output
_____no_output_____
###Markdown
Run simulation
###Code
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60) # plotting
###Output
Percent: [####################] 99.95% 0000000001%
###Markdown
Build 3D fluvial model
###Code
h_mud = 0.4 # thickness of overbank deposit for each time step
dx = 10.0 # gridcell size in meters
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('fluvial',h_mud=h_mud,levee_width=4000.0,h=12.0,w=W,bth=0.0,
dcr=10.0,dx=dx,delta_s=deltas,starttime=chb.cl_times[20],endtime=chb.cl_times[-1],
xmin=0,xmax=0,ymin=0,ymax=0)
# create plots
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 4)
###Output
_____no_output_____
###Markdown
Build 3D submarine channel model
###Code
W = 200.0 # channel width (m)
D = 12.0 # channel depth (m)
pad = 50 # padding (number of nodepoints along centerline)
deltas = 100.0 # sampling distance along centerline
nit = 1500 # number of iterations
Cf = 0.02 # dimensionless Chezy friction factor
crdist = 1.5*W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0E-11 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 50 # approximate number of bends you want to model
Sl = 0.01 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1000 # time step when aggradation starts
aggr_factor = 4.0 # aggradation factor (it kicks in after t3)
reload(mp)
ch = mp.generate_initial_channel(W,D,Sl,deltas,pad,n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch],cutoffs=[],cl_times=[0.0],cutoff_times=[]) # create channel belt object
chb.migrate(nit,saved_ts,deltas,pad,crdist,Cf,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60) # plotting
h_mud = 3.0*np.ones((len(chb.cl_times[20:]),))
dx = 15.0
chb_3d, xmin, xmax, ymin, ymax = chb.build_3d_model('submarine',h_mud=h_mud,levee_width=5000.0,h=12.0,w=W,bth=6.0,
dcr=7.0,dx=dx,delta_s=deltas,starttime=chb.cl_times[20],endtime=chb.cl_times[-1],
xmin=0,xmax=0,ymin=0,ymax=0)
fig1,fig2,fig3 = chb_3d.plot_xsection(100, [[0.5,0.25,0],[0.9,0.9,0],[0.5,0.25,0]], 10)
###Output
_____no_output_____
###Markdown
Input parameters
###Code
nit = 2000 # number of iterations
W = 200.0 # channel width (m)
D = 6.0 # channel depth (m)
depths = D * np.ones((nit,)) # channel depths for different iterations
pad = 100 # padding (number of nodepoints along centerline)
deltas = 50.0 # sampling distance along centerline
Cfs = 0.011 * np.ones((nit,)) # dimensionless Chezy friction factor
crdist = 2 * W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0e-12 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.0 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1200 # time step when aggradation starts
aggr_factor = 2e-9 # aggradation factor (m/s, about 0.06 m/year; it kicks in after t3)
###Output
_____no_output_____
###Markdown
Initialize model
###Code
ch = mp.generate_initial_channel(W, depths[0], Sl, deltas, pad, n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch], cutoffs=[], cl_times=[0.0], cutoff_times=[]) # create channel belt object
###Output
_____no_output_____
###Markdown
Run simulation
###Code
chb.migrate(nit,saved_ts,deltas,pad,crdist,depths,Cfs,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat', 20, 60, chb.cl_times[-1], len(chb.channels)) # plotting
# check the z-profiles (to see whether there is the right amount of incision/aggradation):
plt.figure()
for channel in chb.channels:
plt.plot(channel.x, channel.z, 'k', linewidth=0.5)
###Output
_____no_output_____
###Markdown
Create a "geomorphologic" display that takes into account that older point bars and cutoffs are covered by vegetation:
###Code
fig = chb.plot('morph', 20, 60, chb.cl_times[-1], len(chb.channels))
###Output
_____no_output_____
###Markdown
Create a map that is colored by the age of the point bars:
###Code
fig = chb.plot('age', 20, 60, chb.cl_times[-1], len(chb.channels))
###Output
_____no_output_____
###Markdown
Create movie
###Code
dirname = '/Users/zoltan/Dropbox/Channels/temp/'
chb.create_movie(xmin=10000, xmax=30000, plot_type='strat', filename='movie', dirname=dirname,
pb_age = 1, ob_age = 20, end_time = chb.cl_times[-1], n_channels = len(chb.channels))
###Output
_____no_output_____
###Markdown
Build 3D fluvial model Non-interactive definition of x- and y-extent. If the parameters 'xmin', 'xmax', 'ymin', and 'ymax' are non-zero (as in the cell below), they will be used to define the extent of the area of interest used to build the 3D model. At least initially, it is a good idea to keep this segment relatively small (only a few bends long) to avoid building very large models.
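If you prefer to set these limits programmatically instead of hard-coding them, one option is to take a bounding box around part of the most recent centerline. This is only a sketch: it assumes that `Channel` objects expose a `y` attribute alongside the `x` and `z` attributes already used elsewhere in this notebook, and the selection rule (keep the downstream half) is arbitrary.
```python
import numpy as np

# bounding box around the downstream half of the last saved centerline
x = np.asarray(chb.channels[-1].x)   # centerline x coordinates (used elsewhere in this notebook)
y = np.asarray(chb.channels[-1].y)   # assumption: y is stored the same way as x and z
keep = x > np.median(x)              # keep roughly the downstream half of the channel

pad_m = 2 * W                        # extra margin around the selected bends, in meters
xmin, xmax = x[keep].min() - pad_m, x[keep].max() + pad_m
ymin, ymax = y[keep].min() - pad_m, y[keep].max() + pad_m
print(xmin, xmax, ymin, ymax)        # plug these into build_3d_model if they look reasonable
```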
###Code
plt.close('all')
h_mud = 1.0 * np.ones((len(chb.channels),)) # thickness of overbank deposit for each time step
dx = 10.0 # gridcell size in meters
diff_scale = 2.0 * W/dx
v_coarse = 10.0 # deposition rate of coarse overbank sediment, in m/year (excluding times of no flooding)
v_fine = 0.0 # deposition rate of fine overbank sediment, in m/year (excluding times of no flooding)
chb_3d, xmin, xmax, ymin, ymax, dists, zmaps = mp.build_3d_model(chb, 'fluvial',
h_mud=h_mud, h=12.0, w=W,
bth=0.0, dcr=10.0, dx=dx, delta_s=deltas, dt=dt, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
diff_scale=diff_scale, v_fine=v_fine, v_coarse=v_coarse,
xmin=9000, xmax=15000, ymin=-3500, ymax=3500)
# create plots
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.9,0.9,0],[0.5,0.25,0]], 4)
###Output
_____no_output_____
###Markdown
Build fluvial model with variable depths and well-defined scrolls
###Code
nit = 2000 # number of iterations
W = 200.0 # channel width (m)
D = 6.0 # channel depth (m)
saved_ts = 20 # which time steps will be saved
# create variable depth sequence:
depths = D * np.ones((nit,)) + np.repeat(1.5*(np.random.random_sample(int(nit/saved_ts))-0.5), saved_ts)
pad = 100 # padding (number of nodepoints along centerline)
deltas = 50.0 # sampling distance along centerline
Cfs = 0.011 * np.ones((nit,)) # dimensionless Chezy friction factor
crdist = 2 * W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0e-12 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
n_bends = 30 # approximate number of bends you want to model
Sl = 0.0 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 900 # time step when lateral migration starts
t3 = 10000 # time step when aggradation starts
aggr_factor = 2e-9 # aggradation factor (m/s, about 0.06 m/year; it kicks in after t3)
ch = mp.generate_initial_channel(W, depths[0], Sl, deltas, pad, n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch], cutoffs=[], cl_times=[0.0], cutoff_times=[]) # create channel belt object
chb.migrate(nit,saved_ts,deltas,pad,crdist,depths,Cfs,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat', 20, 60, chb.cl_times[-1], len(chb.channels)) # plotting
# add a bit more incision:
for i in range(len(chb.channels)):
chb.channels[i].z = np.ones(np.shape(chb.channels[i].x))*(-0.1 * i)
# create 'h_mud' sequence that mimics the variability in depth through time:
depths1 = depths[::saved_ts]
depths1 = np.hstack((depths1[0], depths1))
h_mud = depths1 - 5.0 # maximum thickness of overbank deposit for each time step
dx = 10.0 # gridcell size in meters
# reduce diffusion length scale:
diff_scale = 1.0 * W/dx
# increase deposition rate of coarse sediment:
v_coarse = 20.0 # deposition rate of coarse overbank sediment, in m/year (excluding times of no flooding)
v_fine = 0.0 # deposition rate of fine overbank sediment, in m/year (excluding times of no flooding)
chb_3d, xmin, xmax, ymin, ymax, dists, zmaps = mp.build_3d_model(chb, 'fluvial',
h_mud=h_mud, h=12.0, w=W,
bth=0.0, dcr=10.0, dx=dx, delta_s=deltas, dt=dt, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
diff_scale=diff_scale, v_fine=v_fine, v_coarse=v_coarse,
xmin=8000, xmax=15000, ymin=-3500, ymax=3500)
# create plots
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.9,0.9,0],[0.5,0.25,0]], 4)
###Output
100%|██████████| 100/100 [00:41<00:00, 2.41it/s]
###Markdown
Interactive definition of x- and y-extent. After you run the next cell, you need to select the upper left and lower right corners of the area of interest for which you want to build a 3D model. At least initially, it is a good idea to keep this segment relatively small (only a few bends long) to avoid building very large models. The area will only be highlighted (as a red rectangle) after the 3D model building has finished.
###Code
chb_3d, xmin, xmax, ymin, ymax, dists, zmaps = mp.build_3d_model(chb, 'fluvial',
h_mud=h_mud, h=12.0, w=W,
bth=0.0, dcr=10.0, dx=dx, delta_s=deltas, dt=dt, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
diff_scale=diff_scale, v_fine=v_fine, v_coarse=v_coarse,
xmin=0, xmax=0, ymin=0, ymax=0)
# create plots
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.9,0.9,0],[0.5,0.25,0]], 4)
###Output
_____no_output_____
###Markdown
Build 3D submarine channel model
###Code
nit = 2000 # number of iterations
W = 200.0 # channel width (m)
D = 6.0 # channel depth (m)
depths = D * np.ones((nit,)) # channel depths for different iterations
pad = 50 # padding (number of nodepoints along centerline)
deltas = W/4 # sampling distance along centerline
Cfs = 0.011 * np.ones((nit,)) # dimensionless Chezy friction factor
crdist = 1.5 * W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0e-12 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.0 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1300 # time step when aggradation starts
aggr_factor = 2e-8 # aggradation factor
ch = mp.generate_initial_channel(W, depths[0], Sl, deltas, pad, n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch], cutoffs=[], cl_times=[0.0], cutoff_times=[]) # create channel belt object
chb.migrate(nit,saved_ts,deltas,pad,crdist,depths,Cfs,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60,chb.cl_times[-1],len(chb.channels)) # plotting
# check the z-profiles (to see whether there is the right amount of incision/aggradation):
plt.figure()
for channel in chb.channels:
plt.plot(channel.x, channel.z, 'k', linewidth=0.5)
###Output
_____no_output_____
###Markdown
Interactive definition of x- and y-extent. After you run the next cell, you need to select the upper left and lower right corners of the area of interest for which you want to build a 3D model. At least initially, it is a good idea to keep this segment relatively small (only a few bends long) to avoid building very large models. The area will only be highlighted (as a red rectangle) after the 3D model building has finished.
###Code
h_mud = 4.0 * np.ones((len(chb.cl_times),)) # thickness of overbank deposit for each time step
dx = 10.0 # gridcell size in meters
diff_scale = 3 * W/dx
v_coarse = 4.0 # deposition rate of coarse overbank sediment, in m/year (excluding times of no flow in channel)
v_fine = 0.0 # deposition rate of fine overbank sediment, in m/year (excluding times of no flow in channel)
chb_3d, xmin, xmax, ymin, ymax, dists, zmaps = mp.build_3d_model(chb,
'submarine', h_mud=h_mud, h=15.0, w=W,
bth=4.0, dcr=6.0, dx=dx, delta_s=deltas, dt=dt, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
diff_scale=diff_scale, v_fine=v_fine, v_coarse=v_coarse,
xmin=0, xmax=0, ymin=0, ymax=0)
# xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax)
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.9,0.9,0], [0.5,0.25,0]], 10)
###Output
_____no_output_____
###Markdown
Build submarine channel model with along-channel slope
###Code
nit = 2000 # number of iterations
W = 200.0 # channel width (m)
D = 6.0 # channel depth (m)
depths = D * np.ones((nit,)) # channel depths for different iterations
pad = 50 # padding (number of nodepoints along centerline)
deltas = W/4 # sampling distance along centerline
Cfs = 0.011 * np.ones((nit,)) # dimensionless Chezy friction factor
crdist = 1.5 * W # threshold distance at which cutoffs occur
kl = 60.0/(365*24*60*60.0) # migration rate constant (m/s)
kv = 1.0e-12 # vertical slope-dependent erosion rate constant (m/s)
dt = 2*0.05*365*24*60*60.0 # time step (s)
dens = 1000 # density of water (kg/m3)
saved_ts = 20 # which time steps will be saved
n_bends = 30 # approximate number of bends you want to model
Sl = 0.01 # initial slope (matters more for submarine channels than rivers)
t1 = 500 # time step when incision starts
t2 = 700 # time step when lateral migration starts
t3 = 1300 # time step when aggradation starts
aggr_factor = 4 # aggradation factor
ch = mp.generate_initial_channel(W, depths[0], Sl, deltas, pad, n_bends) # initialize channel
chb = mp.ChannelBelt(channels=[ch], cutoffs=[], cl_times=[0.0], cutoff_times=[]) # create channel belt object
chb.migrate(nit,saved_ts,deltas,pad,crdist,depths,Cfs,kl,kv,dt,dens,t1,t2,t3,aggr_factor) # channel migration
fig = chb.plot('strat',20,60,chb.cl_times[-1],len(chb.channels)) # plotting
# check the z-profiles (to see whether there is the right amount of incision/aggradation):
plt.figure()
for channel in chb.channels:
plt.plot(channel.x, channel.z, 'k', linewidth=0.5)
h_mud = 4.0 * np.ones((len(chb.cl_times),)) # thickness of overbank deposit for each time step
dx = 10.0 # gridcell size in meters
diff_scale = 3 * W/dx
v_coarse = 4.0 # deposition rate of coarse overbank sediment, in m/year (excluding times of no flow in channel)
v_fine = 0.0 # deposition rate of fine overbank sediment, in m/year (excluding times of no flow in channel)
chb_3d, xmin, xmax, ymin, ymax, dists, zmaps = mp.build_3d_model(chb,
'submarine', h_mud=h_mud, h=15.0, w=W,
bth=4.0, dcr=6.0, dx=dx, delta_s=deltas, dt=dt, starttime=chb.cl_times[0], endtime=chb.cl_times[-1],
diff_scale=diff_scale, v_fine=v_fine, v_coarse=v_coarse,
xmin=0, xmax=0, ymin=0, ymax=0)
fig1,fig2,fig3 = chb_3d.plot_xsection(200, [[0.9,0.9,0], [0.5,0.25,0]], 10)
###Output
_____no_output_____ |
DecisionTrees.ipynb | ###Markdown
**Decision Tree**: testing an attribute and branching on the outcomes.* node: a test.* branch: the result of a test.* leaf node: assigns a class.* Pure node: 100% of the cases in the node belong to the same class.* Entropy: a measure of randomness or uncertainty; lower entropy => a less uniform (more skewed) class distribution => a purer node.* *Entropy = -p(A).log2(p(A)) - p(B).log2(p(B))** Information Gain: the information that can increase the level of certainty after splitting.* *information gain = (entropy before split) - (weighted entropy after split)*.> More predictiveness.> Less impurity.> Lower entropy.> Higher IG. Decision Tree example: splitting patients among 5 medications.
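Before downloading the drug data, here is a tiny numeric check of the entropy and information-gain formulas above; the class counts are made up purely for illustration.
```python
import numpy as np

def entropy(counts):
    """Entropy of a node given the class counts in it."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log2(0) is treated as 0
    return -np.sum(p * np.log2(p))

parent = [9, 5]                       # e.g. 9 cases of drug A, 5 of drug B in the parent node
left, right = [8, 1], [1, 4]          # a hypothetical split of those 14 cases

n = sum(parent)
weighted_child = (sum(left) / n) * entropy(left) + (sum(right) / n) * entropy(right)
info_gain = entropy(parent) - weighted_child
print(round(entropy(parent), 3), round(info_gain, 3))   # ~0.94 and ~0.359
```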
###Code
# downloading the data
!wget -O drug200.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/drug200.csv
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
df = pd.read_csv('/content/drug200.csv')
df.info()
df.head()
X = df.drop('Drug',axis=1).values
X[:5]
# get different categories of categorical variables.
df['Cholesterol'].value_counts()
# Sex,BP,Cholesterol are categorical variables.
# let's encode them as numeric labels instead so they work with sklearn.
from sklearn import preprocessing
di_sex = preprocessing.LabelEncoder()
di_BP = preprocessing.LabelEncoder()
di_Chol = preprocessing.LabelEncoder()
di_sex.fit(['F','M'])
di_BP.fit(['HIGH','LOW','NORMAL'])
di_Chol.fit(['HIGH','NORMAL'])
X[:,1] = di_sex.transform(X[:,1])
X[:,2] = di_BP.transform(X[:,2])
X[:,3] = di_Chol.transform(X[:,3])
X[:5]
Y = df['Drug']
Y[:5]
df['Drug'].value_counts()
# train test split
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(X,Y,test_size=0.3,random_state=3)
x_test.shape
x_train.shape
drugTree = DecisionTreeClassifier(criterion='entropy',max_depth=4)
drugTree
drugTree.fit(x_train,y_train)
# prediction
predTree = drugTree.predict(x_test)
print(predTree[:5])
print(y_test[:5])
# evaluation
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test,predTree))
# without sklearn
isTrue_ = (y_test == predTree)
isTrue_[:5]
isTrue_.sum()
print('accuracy: ',isTrue_.sum()/isTrue_.shape[0])
# visualization
from io import StringIO  # sklearn.externals.six was removed in newer scikit-learn versions; io.StringIO works here
import pydotplus
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn import tree
%matplotlib inline
dot_data = StringIO()
filename = "drugtree.png"
featuresNames = df.columns[:5]
targetNames = df['Drug'].unique().tolist()
out = tree.export_graphviz(drugTree,feature_names=featuresNames,out_file=dot_data,class_names=np.unique(y_train),filled=True,special_characters=True,rotate=False)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png(filename)
img = mpimg.imread(filename)
plt.figure(figsize=(100,200))
plt.imshow(img,interpolation='nearest')
plt.show()
###Output
_____no_output_____
###Markdown
*This lesson is entirely based on **chapter 6** of the book [Hands-On Machine Learning with Scikit-Learn & TensorFlow](http://shop.oreilly.com/product/0636920052289.do) by Aurélien Geron; the book's notebooks are available [on GitHub](https://github.com/ageron/handson-ml).* Decision Trees: Decision Trees are versatile **supervised** ML algorithms that can be used for both **classification** and **regression**. What are Decision Trees? To understand what a Decision Tree is, let's start by **training and visualizing a Decision Tree**: we will build a Decision Tree and see how it makes predictions. The code below trains a _DecisionTreeClassifier_ on the iris dataset. **Iris dataset**: a famous dataset with the petal and sepal lengths and widths of 150 iris flowers from 3 different species: Iris-Setosa, Iris-Versicolor and Iris-Virginica. Let's start with the usual imports and setup.
###Code
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris() # returns a dictionary-like object
print("iris dataset keys: ", iris.keys())
print('iris["target_names"]: ', iris['target_names'])
print("iris['target'] = ", iris['target'])
print("target lenght: ", len(iris['target']))
print("iris['feature_names: ']", iris['feature_names'])
print("iris['data'][0,:] = ", iris.data[0,:])
print("iris['data'][0, 2:] = ", iris.data[0,2:])
X = iris.data[:, 2:] # petal length and width
y = iris.target
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X, y)
###Output
_____no_output_____
###Markdown
To visualize the trained Decision Tree, we will use the [export_graphviz](https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_graphviz.html) function from the ***sklearn.tree*** module, which produces a ***.dot*** output file that can then be converted to *.png*.
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
print(os.getcwd())
from sklearn.tree import export_graphviz
export_graphviz(
tree_clf,
out_file="/content/iris_tree.dot",
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
from google.colab import files
files.download( "iris_tree.dot" )
###Output
_____no_output_____
###Markdown
You can install the **graphviz** package on your local system and run the following command (from your terminal) to convert the ***dot*** file into ***png***: > dot -Tpng iris_tree.dot -o iris_tree.png Note: **graphviz** needs to be installed system-wide (and not just for the local user), otherwise the command above will not work. A **Decision Tree** has a ***flowchart*** structure. * Each "box" is called a ***node*** * **Root Node:** represents the whole population or sample and is subdivided into more homogeneous sets (according to the node's decision) * **Splitting:** the process of dividing a node into sub-nodes * **Decision Node:** when a node splits into further sub-nodes, it is a decision node * **Leaf / Terminal Node:** nodes that do not split any further are called "leaves" (***leaf nodes***) or ***terminal nodes*** * **Pruning:** removing sub-nodes of a decision node is called ***pruning*** * **Branch / Sub-Tree:** a sub-section of an entire tree is called a ***branch*** or ***sub-tree*** * **Parent and Child Node:** a node that is subdivided into sub-nodes is called a ***parent node***; the sub-nodes are its ***child nodes***. Making predictions with the Decision Tree: we start from a training sample (the ***iris*** dataset) and build a **decision tree** that has learned from our training data. To predict the type of an unknown iris flower, our tree starts at the **root node** and follows the flowchart until it determines the iris type, and it does the same for every new iris flower to be classified. In the figure above, each node carries a set of variables: * **samples**: an attribute that counts how many training instances the node applies to. * **depth**: the ***depth of the tree***, defined by the "layers" formed by the sub-nodes. Our example has depth=2. * **value**: the node attribute called *value* tells us how many instances of each class (***target_names***) the node applies to. For example: [0,1,45] means the node applies to 0 irises of type Setosa, 1 of type Versicolor and 45 of type Virginica. * **gini**: an attribute that measures the ***impurity*** of the node. * **pure node:** gini = 0 ==> all instances the node applies to belong to a single class. The Gini impurity (**gini score**) is computed from the following equation:\begin{equation*}G_i = 1 - \sum_{k=1}^{n} p_{i,k}^{2}\end{equation*}where $p_{i,k}$ is the ratio of the number of instances of class $k$ to all training instances in the *i-th* node. For example: the left node at depth=2 in the figure (the green node) has the following *gini score*:\begin{equation*}G_i = 1 - \left[\left(\frac{0}{54}\right)^2 + \left(\frac{49}{54}\right)^2 + \left(\frac{5}{54}\right)^2\right] \approx 0.168\end{equation*} **Decision boundaries of the Decision Tree** The following function helps visualize the boundaries determined by the Decision Tree
###Code
## tree visualization script
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap='rainbow')
if not iris:
plt.contour(x1, x2, y_pred, cmap='rainbow', alpha=0.8)
if plot_training:
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica")
plt.axis(axes)
if iris:
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
else:
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
if legend:
plt.legend(loc="lower right", fontsize=14)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf, X, y)
plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2)
plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2)
plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2)
plt.text(1.40, 1.0, "Depth=0", fontsize=15)
plt.text(3.2, 1.80, "Depth=1", fontsize=13)
plt.text(4.05, 0.5, "(Depth=2)", fontsize=11)
#save_fig("decision_tree_decision_boundaries_plot")
plt.show()
###Output
_____no_output_____
###Markdown
**Exercise:** Generalize the ***tree visualization script*** above to draw the decision boundaries for any tree, reading the decision values (***threshold***) directly from the fitted tree_ object. You can use this scikit-learn [script](https://scikit-learn.org/stable/auto_examples/tree/plot_unveil_tree_structure.html#sphx-glr-auto-examples-tree-plot-unveil-tree-structure-py) as an example of how to access the tree structure. In the example above, **depth=0** marks the decision boundary of the ***root node***. * The area on the left is **pure** (Iris-Setosa, gini=0.0, the left **leaf** of our tree): it cannot be split any further. * The areas on the right are separated by the bold dashed decision boundary (depth=1). Since we specified ***max_depth=2*** in our training, these areas cannot be subdivided any further. * The lighter dotted lines show how the pink and green areas would be subdivided if we had allowed our tree to grow to a greater depth. Estimating classes and class probabilities: a Decision Tree can also estimate the probability that an instance belongs to a given class $k$: 1. traverse the Decision Tree down to the **leaf node**; 1. return the ratio of training instances of class $k$ in that node. For example, suppose we find a flower whose petals are 5 cm long and 1.5 cm wide. That corresponds to the green depth=2 leaf of our example (the one reached through the right branch of the root). * probability that it is an iris-setosa: $\frac{0}{54} = 0\%$ * probability that it is an iris-versicolor: $\frac{49}{54} = 90.7\%$ * probability that it is an iris-virginica: $\frac{5}{54} = 9.3\%$ ==> predicted class: **iris-versicolor** (class 1), since it has the highest probability.
###Code
tree_clf.predict_proba([[5, 1.5]])
# remember that we trained the tree using only the petal length and width attributes
# X = iris.data[:, 2:] # petal length and width
tree_clf.predict([[5, 1.5]]) # to predict the class
# recall that 0: iris-setosa, 1: iris-versicolor, 2: iris-virginica
###Output
_____no_output_____
###Markdown
The CART training algorithm: **scikit-learn** uses the ***Classification and Regression Tree (CART)*** algorithm to train Decision Trees. The CART algorithm: 1. searches for the pair $(k, t_k)$ = (`feature`, `threshold`) that **maximizes the purity** of the resulting subsets (or sub-nodes), weighting each sub-node by its size; 1. once the sample has been split into two sub-samples, it splits each ***node*** again using the same logic; 1. it applies this logic recursively until it reaches the value of the `max_depth` hyperparameter or cannot find a split that reduces the impurity. 1. A few other hyperparameters also control when the algorithm stops: `min_samples_split`, `min_samples_leaf`, `min_weight_fraction_leaf`, `max_leaf_nodes`. The cost function the algorithm tries to minimize is the following:\begin{equation*}J(k, t_k) = \frac{m_{left}}{m}G_{left} + \frac{m_{right}}{m}G_{right}\end{equation*}where * $G_{left/right}$ measures the impurity of the left/right subset * $m_{left/right}$ is the number of instances in the left/right subset. The CART algorithm minimizes this function greedily, one `depth` level at a time; that is, it does not try to optimize splits at deeper levels ahead of time. Gini impurity or entropy? When the `tree.DecisionTreeClassifier` classifier is instantiated, the hyperparameter `criterion='gini'` is the default, but there is another option, `criterion='entropy'`, which is also used to measure a node's impurity. The entropy is zero when a `node` contains instances of only one class. The equation below shows the definition of entropy for the `i-th node`:\begin{equation*}H_i = - \sum_{k=1}^{n} p_{i,k}log(p_{i,k})\end{equation*}The entropy of the left `node` at `depth=2` is computed in the code cell below (right after the short CART sketch that follows):
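To make the CART cost function above concrete, here is a small sketch: a brute-force search over the thresholds of a single feature, which is the essence of what CART does at each node (this is not scikit-learn's actual implementation, just an illustration using the iris `X` and `y` defined above).
```python
import numpy as np

def gini(labels):
    """Gini impurity of a set of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    """Threshold minimizing J = m_left/m * G_left + m_right/m * G_right."""
    m = len(labels)
    best_t, best_cost = None, np.inf
    for t in np.unique(feature):
        left, right = labels[feature <= t], labels[feature > t]
        if len(left) == 0 or len(right) == 0:
            continue
        cost = len(left) / m * gini(left) + len(right) / m * gini(right)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

t_k, cost = best_split(X[:, 0], y)   # petal length column of the iris features above
print(t_k, round(cost, 3))           # same left/right partition as the root node (petal length <= 1.9 goes left)
```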
###Code
import math
H_i = -(49/54)*math.log(49/54) - (5/54)*math.log(5/54)
print(round(H_i,2))
###Output
0.31
###Markdown
In general both criteria lead to similar trees, and the Gini impurity is slightly faster to compute. When the two criteria differ, Gini tends to isolate the most frequent class in its own `branch`, while `entropy` produces a more balanced tree. Regularization hyperparameters: Decision Trees do not have predefined parameters, as is the case, for example, with linear models. So, if left unconstrained, `decision trees` tend to reproduce the training data and, consequently, to `overfit` it. To avoid `overfitting` the training data, we need to restrict the "freedom" of the `Decision Tree` during training ==> **regularization**. Regularization hyperparameters: * `max_depth`: restricts the "depth" of the tree; in scikit-learn the default is "None"; * `min_samples_split`: the minimum number of samples a `node` must have before it can be split; * `min_samples_leaf`: the minimum number of samples a `leaf` node must have; * `min_weight_fraction_leaf`: the minimum fraction of the weighted total number of instances a leaf must have; * `max_leaf_nodes`: the maximum number of `leaf` nodes; * `max_features`: the maximum number of `features` evaluated when splitting each `node`. The model is **regularized** by increasing the `min_*` hyperparameters and decreasing the `max_*` ones. Below is a regularization example, using a "moons" dataset to train our tree.
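Before the moons example, a small illustration of the two points above (the impurity criterion and a few of the regularization hyperparameters) being passed to the scikit-learn constructor; the specific values are arbitrary, chosen only for the example, and `get_depth`/`get_n_leaves` assume scikit-learn 0.21 or newer.
```python
from sklearn.tree import DecisionTreeClassifier

# entropy instead of the default gini, plus a few regularization constraints
tree_clf_reg = DecisionTreeClassifier(
    criterion="entropy",     # impurity measure used to choose splits
    max_depth=3,             # limit the depth of the tree
    min_samples_split=10,    # a node needs at least 10 samples to be split
    min_samples_leaf=4,      # every leaf keeps at least 4 samples
    random_state=42,
)
tree_clf_reg.fit(X, y)       # the iris petal features and targets defined above
print(tree_clf_reg.get_depth(), tree_clf_reg.get_n_leaves())
```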
###Code
from sklearn.datasets import make_moons
Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53)
deep_tree_clf1 = DecisionTreeClassifier(random_state=42)
deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42)
deep_tree_clf1.fit(Xm, ym)
deep_tree_clf2.fit(Xm, ym)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("No restrictions", fontsize=16)
plt.subplot(122)
plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14)
#save_fig("min_samples_leaf_plot")
plt.show()
###Output
_____no_output_____
###Markdown
**Exercise:** Add a third tree, with different hyperparameters, and compare its result with the two shown above. Regression: Decision Trees are also able to perform **regression** tasks. Let's build a `regression tree` using the scikit-learn class [`DecisionTreeRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html#sklearn.tree.DecisionTreeRegressor) and > train it on a noisy dataset, with `max_depth=2`.
###Code
def quadratic_plus_noise(m = 200):
"""Quadratic training set + noise
m: number of samples
"""
np.random.seed(42)
X = np.random.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10
return X, y
X, y = quadratic_plus_noise()
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg.fit(X, y)
print(tree_reg.tree_.threshold)
from sklearn.tree import export_graphviz
export_graphviz(
tree_reg,
out_file="/content/tree_reg_quad_noise.dot",
feature_names=['x1'],
rounded=True,
filled=True,
precision=4
)
from google.colab import files
files.download( "tree_reg_quad_noise.dot" )
###Output
_____no_output_____
###Markdown
The resulting `Tree` is shown below: this `Tree` is quite similar to the one we obtained earlier. The main difference is that instead of predicting a class at each `node`, it predicts **a value**. If we want to predict a value for a new instance, say $x_1 = 0.6$: > we traverse the `Tree` from the top; > we reach the `leaf` where `value = 0.1106` ==> this value is the mean of the `target` values of the 110 training instances (`sample=110`) in that `leaf node`; > the error associated with this value is the `Mean Squared Error (MSE)`, i.e. the mean squared deviation of those 110 targets from that mean (essentially their variance, $\sigma^2$). > In our example, $MSE = 0.0151$ (a quick numeric check of this follows below). Let's visualize the predictions of our `Tree` for the quadratic-plus-noise model we generated above, using two different hyperparameter settings.
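Before the visualization, here is the quick numeric check mentioned above: the leaf's predicted `value` is just the mean of the targets that land in it, and the reported `mse` is the mean squared deviation from that mean. The sketch uses the regressor's `apply` method, which returns the leaf index reached by each sample.
```python
import numpy as np

# leaf reached by x1 = 0.6 and by every training sample
target_leaf = tree_reg.apply(np.array([[0.6]]))[0]
in_leaf = tree_reg.apply(X) == target_leaf

value = y[in_leaf].mean()                     # what the leaf predicts
mse = ((y[in_leaf] - value) ** 2).mean()      # the "mse" shown in the diagram
print(in_leaf.sum(), round(float(value), 4), round(float(mse), 4))   # should match sample=110, value=0.1106, mse=0.0151
```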
###Code
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2)
tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
y_pred = tree_reg.predict(x1)
plt.axis(axes)
plt.xlabel("$x_1$", fontsize=18)
if ylabel:
plt.ylabel(ylabel, fontsize=18, rotation=0)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_regression_predictions(tree_reg1, X, y)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
plt.text(0.21, 0.65, "Depth=0", fontsize=15)
plt.text(0.01, 0.2, "Depth=1", fontsize=13)
plt.text(0.65, 0.8, "Depth=1", fontsize=13)
plt.legend(loc="upper center", fontsize=18)
plt.title("max_depth=2", fontsize=14)
plt.subplot(122)
plot_regression_predictions(tree_reg2, X, y, ylabel=None)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
for split in (0.0458, 0.1298, 0.2873, 0.9040):
plt.plot([split, split], [-0.2, 1], "k:", linewidth=1)
plt.text(0.3, 0.5, "Depth=2", fontsize=13)
plt.title("max_depth=3", fontsize=14)
#save_fig("tree_regression_plot")
plt.show()
###Output
_____no_output_____
###Markdown
Note that the predicted values are simply the mean of the `target` values of the instances in each region. **Exercise:** Generalize the script above to read the `threshold` values of each `node` directly from the tree_ object and use those values to draw the `Tree` visualization. Overfitting in regression: just as for classification, `Decision Trees` are prone to overfitting if we do not regularize the hyperparameters.
###Code
tree_reg1 = DecisionTreeRegressor(random_state=42)
tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
x1 = np.linspace(0, 1, 500).reshape(-1, 1)
y_pred1 = tree_reg1.predict(x1)
y_pred2 = tree_reg2.predict(x1)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", fontsize=18, rotation=0)
plt.legend(loc="upper center", fontsize=18)
plt.title("No restrictions", fontsize=14)
plt.subplot(122)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14)
#save_fig("tree_regression_regularization_plot")
plt.show()
###Output
_____no_output_____
###Markdown
**Exercise:** Try different hyperparameter settings and see how the `Tree` behaves. Instability: `Decision Trees` are very sensitive to variations in the training data. For example, let's see what happens when we remove the widest Iris-Versicolor flower from the iris training set we started this lesson with.
###Code
X = iris.data[:, 2:] # petal length and width
y = iris.target
X[(X[:, 1]==X[:, 1][y==1].max()) & (y==1)] # widest Iris-Versicolor flower
not_widest_versicolor = (X[:, 1]!=1.8) | (y==2)
X_tweaked = X[not_widest_versicolor]
y_tweaked = y[not_widest_versicolor]
tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40)
tree_clf_tweaked.fit(X_tweaked, y_tweaked)  # retrain on the data without the widest Iris-Versicolor
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False)
plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2)
plt.plot([0, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.text(1.0, 0.9, "Depth=0", fontsize=15)
plt.text(1.0, 1.80, "Depth=1", fontsize=13)
#save_fig("decision_tree_instability_plot")
plt.show()
###Output
_____no_output_____
###Markdown
###Code
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
###Output
_____no_output_____
###Markdown
Let's start out with classification with decision trees and then move on to regression.
###Code
iris = load_iris()
X = iris.data[:, 2:] # the third attribute and everything after it is the petal length and the width as there are only 4 attributes
y = iris.target
model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)
###Output
_____no_output_____
###Markdown
Since a decision tree is not a black-box model, we can see what rules it is forming in order to classify the data.
###Code
from sklearn.tree import export_graphviz
diagram = export_graphviz(
model,
out_file=("iris_tree.dot"),
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
import graphviz
with open("iris_tree.dot") as f:
dot_graph = f.read()
# remove the display(...)
graphviz.Source(dot_graph)
###Output
_____no_output_____
###Markdown
From this diagram, it is pretty obvious why the decision tree is called what it is. It splits the data into two subsets and keeps splitting each subset until it has a result. This is done by the CART training algorithm, which recursively splits the data into a tree of rules. Anyhow, it is time for regression.
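Before moving on to regression: if graphviz is not available, the same rules can also be dumped as plain text. This is a sketch that assumes scikit-learn 0.21 or newer, which provides sklearn.tree.export_text.
```python
from sklearn.tree import export_text

# text-only view of the fitted classifier's rules (no graphviz needed)
rules = export_text(model, feature_names=iris.feature_names[2:])
print(rules)
```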
###Code
model_2 = DecisionTreeRegressor(max_depth=3)
model_2.fit(X, y)
from sklearn.tree import export_graphviz
diagram_2 = export_graphviz(
model_2,
out_file=("iris_tree_lin.dot"),
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
import graphviz
with open("iris_tree_lin.dot") as f:
dot_graph = f.read()
# remove the display(...)
graphviz.Source(dot_graph)
###Output
_____no_output_____
###Markdown
Introduction: Decision trees are a powerful and popular machine learning technique. The basic concept is very similar to trees you may have seen commonly used to aid decision-making. The decision tree algorithm is a supervised learning algorithm -- we first construct the tree with historical data, and then use it to predict an outcome. One of the major advantages of decision trees is that they can pick up nonlinear interactions between variables in the data that linear regression can't. This example uses decision trees to look at individual income in the US. The data is from the 1994 census, and contains information on an individual's marital status, age, type of work, and more. The target column, or what we want to predict, is whether individuals make less than or equal to 50k a year, or more than 50k a year.
###Code
import pandas
# Set index_col to False to avoid pandas thinking that the first column is row indexes (it's age)
income = pandas.read_csv("C:/Users/Jennifer/Documents/Python/Data/income.csv", index_col=False)
income.head()
###Output
_____no_output_____
###Markdown
Converting Categorical Variables: As we can see in the data, we have categorical variables such as workclass that have string values. Multiple individuals can share the same string value. The types of work include State-gov, Self-emp-not-inc, Private, and so on. Each of these strings is a label for a category. Another example of a column of categories is sex, where the options are Male and Female. Before we get started with decision trees, we need to convert the categorical variables in our data set to numeric variables. This involves assigning a number to each category label, then converting all of the labels in a column to the corresponding numbers.
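Note that Categorical.from_array, used in the next cell, has since been deprecated in pandas. A sketch of the same conversion with the current API (assuming a reasonably recent pandas version) would look like this; it is an alternative to, not a prerequisite for, the cell below.
```python
# equivalent conversion with the non-deprecated pandas API
categorical_cols = ["workclass", "education", "marital_status", "occupation",
                    "relationship", "race", "sex", "native_country", "high_income"]
for name in categorical_cols:
    income[name] = income[name].astype("category").cat.codes
```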
###Code
col = pandas.Categorical.from_array(income["workclass"])
income["workclass"] = col.codes
print(income["workclass"].head(5))
for name in ["education", "marital_status", "occupation", "relationship", "race", "sex", "native_country", "high_income"]:
col = pandas.Categorical.from_array(income[name])
income[name] = col.codes
###Output
c:\users\jennifer\appdata\local\programs\python\python36-32\lib\site-packages\ipykernel_launcher.py:1: FutureWarning: Categorical.from_array is deprecated, use Categorical instead
"""Entry point for launching an IPython kernel.
c:\users\jennifer\appdata\local\programs\python\python36-32\lib\site-packages\ipykernel_launcher.py:5: FutureWarning: Categorical.from_array is deprecated, use Categorical instead
"""
###Markdown
Using Decision Trees with Scikit-Learn: We can use the scikit-learn package to fit a decision tree. We use the DecisionTreeClassifier class for classification problems, and DecisionTreeRegressor for regression problems. The sklearn.tree package includes both of these classes. In this case, we're predicting a binary outcome, so we'll use a classifier. The first step is to train the classifier on the data. We'll use the fit method on a classifier to do this.
###Code
from sklearn.tree import DecisionTreeClassifier
# A list of columns to train with
# We've already converted all columns to numeric
columns = ["age", "workclass", "education_num", "marital_status", "occupation", "relationship", "race", "sex", "hours_per_week", "native_country"]
# Instantiate the classifier
# Set random_state to 1 to make sure the results are consistent
clf = DecisionTreeClassifier(random_state=1)
# We've already loaded the variable "income," which contains all of the income data
clf.fit(income[columns], income["high_income"])
###Output
_____no_output_____
###Markdown
Splitting the dataset into training and test sets: Now that we've fit a model, we can make predictions. We'll want to split our data into training and testing sets first. If we don't, we'll be making predictions on the same data that we train our algorithm with. This leads to overfitting, and will make our error appear lower than it is.
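The next cell does the shuffle-and-slice by hand; an equivalent and more common way is scikit-learn's train_test_split, sketched here under the assumption that an 80/20 split is what we want.
```python
from sklearn.model_selection import train_test_split

# 80% train / 20% test, shuffled with a fixed seed for reproducibility
train_df, test_df = train_test_split(income, test_size=0.2, random_state=1)
print(train_df.shape, test_df.shape)
```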
###Code
import numpy
import math
# Set a random seed so the shuffle is the same every time
numpy.random.seed(1)
# Shuffle the rows
# This permutes the index randomly using numpy.random.permutation
# Then, it reindexes the dataframe with the result
# The net effect is to put the rows into random order
income = income.reindex(numpy.random.permutation(income.index))
train_max_row = math.floor(income.shape[0] * .8)
train = income.iloc[:train_max_row]
test = income.iloc[train_max_row:]
###Output
_____no_output_____
###Markdown
Evaluating Error Using AUC: While there are many methods for evaluating error with classification, we'll use AUC. AUC ranges from 0 to 1, so it's ideal for binary classification. The higher the AUC, the more accurate our predictions. We can compute AUC with the roc_auc_score function from sklearn.metrics. This function takes in two parameters: y_true (the true labels) and y_score (the predicted labels). It then calculates and returns the AUC value.
###Code
from sklearn.metrics import roc_auc_score
clf = DecisionTreeClassifier(random_state=1)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
error = roc_auc_score(test["high_income"], predictions)
print(error)
###Output
0.693465632475
###Markdown
Computing error on the training set: The AUC for the predictions on the testing set is about .694. Let's compare this against the AUC for predictions on the training set to see if the model is overfitting. It's normal for the model to predict the training set better than the testing set. After all, it has full knowledge of that data and the outcomes. However, if the AUC between training set predictions and actual values is significantly higher than the AUC between test set predictions and actual values, it's a sign that the model may be overfitting.
###Code
predictions = clf.predict(train[columns])
print(roc_auc_score(train["high_income"], predictions))
###Output
0.947124450144
###Markdown
Decision Tree Overfitting: Our AUC on the training set was .947, and the AUC on the test set was .694. There's no hard and fast rule on when overfitting is occurring, but our model is predicting the training set much better than the test set. Splitting the data into training and testing sets doesn't prevent overfitting -- it just helps us detect and fix it. There are three main ways to combat overfitting:- "Prune" the tree after we build it to remove unnecessary leaves.- Use ensembling to blend the predictions of many trees.- Restrict the depth of the tree while we're building it. While we'll explore all of these, we'll look at the third method first. Limiting tree depth during the building process will result in more general rules. This prevents the tree from overfitting. We can restrict tree depth by adding a few parameters when we initialize the DecisionTreeClassifier class:- max_depth - Globally restricts how deep the tree can go- min_samples_split - The minimum number of rows a node should have before it can be split; if this is set to 2, for example, then nodes with 2 rows won't be split, and will become leaves instead- min_samples_leaf - The minimum number of rows a leaf must have- min_weight_fraction_leaf - The fraction of input rows a leaf must have- max_leaf_nodes - The maximum number of total leaves; this will cap how many leaf nodes the tree can have
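One systematic way to pick these values, rather than trying them one at a time as in the next few cells, is a small grid search with AUC as the scoring metric. This is only a sketch and the parameter grid is arbitrary; it is not part of the original walkthrough.
```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

param_grid = {
    "max_depth": [2, 5, 7, 10, None],
    "min_samples_split": [2, 13, 100],
}
search = GridSearchCV(
    DecisionTreeClassifier(random_state=1),
    param_grid,
    scoring="roc_auc",   # same metric we use on the held-out test set
    cv=5,
)
search.fit(train[columns], train["high_income"])
print(search.best_params_, round(search.best_score_, 3))
```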
###Code
# Decision trees model from the last screen
clf = DecisionTreeClassifier(random_state=1)
clf = DecisionTreeClassifier(min_samples_split=13, random_state=1)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
test_auc = roc_auc_score(test["high_income"], predictions)
train_predictions = clf.predict(train[columns])
train_auc = roc_auc_score(train["high_income"], train_predictions)
print(test_auc)
print(train_auc)
###Output
0.699561714515
0.842143184928
###Markdown
Tweaking parameters to adjust AUC: By setting min_samples_split to 13, we managed to boost the test AUC from .694 to .700. The training set AUC decreased from .947 to .843, showing that the model we built was less overfit to the training set than before. Let's play around with the parameters some more.
###Code
clf = DecisionTreeClassifier(random_state=1, min_samples_split=13, max_depth=7)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
test_auc = roc_auc_score(test["high_income"], predictions)
train_predictions = clf.predict(train[columns])
train_auc = roc_auc_score(train["high_income"], train_predictions)
print(test_auc)
print(train_auc)
###Output
0.743634499673
0.748037708309
###Markdown
We just improved the AUC again! The test set AUC increased to .744, while the training set AUC decreased to .748:
###Code
clf = DecisionTreeClassifier(random_state=1, min_samples_split=100, max_depth=2)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
test_auc = roc_auc_score(test["high_income"], predictions)
train_predictions = clf.predict(train[columns])
train_auc = roc_auc_score(train["high_income"], train_predictions)
print(test_auc)
print(train_auc)
###Output
0.655313848188
0.662450804216
###Markdown
Our accuracy went down in the previous cell, relative to the cell before it. This is because we're now underfitting. Underfitting is what occurs when our model is too simple to explain the relationships between the variables. Bias-Variance Tradeoff: By artificially restricting the depth of our tree, we prevent it from creating a model that's complex enough to correctly categorize some of the rows. If we don't perform the artificial restrictions, however, the tree becomes too complex, fits quirks in the data that only exist in the training set, and doesn't generalize to new data. This is known as the bias-variance tradeoff. Imagine that we take a random sample of the training data and create many models. If the models' predictions for the same row are far apart from each other, we have high variance. Imagine this time that we take a random sample of the training data and create many models. If the models' predictions for the same row are close together but far from the actual value, then we have high bias. High bias can cause underfitting -- if a model is consistently failing to predict the correct value, it may be that it's too simple to model the data faithfully. High variance can cause overfitting. If a model varies its predictions significantly based on small changes in the input data, then it's likely fitting itself to quirks in the training data, rather than making a generalizable model. We call this the bias-variance tradeoff because decreasing one characteristic will usually increase the other. Conclusion - Part 1: Let's go over the main advantages and disadvantages of using decision trees. The main advantages of using decision trees are that they're:- Easy to interpret- Relatively fast to fit and make predictions- Able to handle multiple types of data- Able to pick up nonlinearities in data, and usually fairly accurate. The main disadvantage of using decision trees is their tendency to overfit. Decision trees are a good choice for tasks where it's important to be able to interpret and convey why the algorithm is doing what it's doing. The most powerful way to reduce decision tree overfitting is to create ensembles of trees. The random forest algorithm is a popular choice for doing this. In cases where prediction accuracy is the most important consideration, random forests usually perform better. Let's take a look at random forests. Introduction to Random Forests: A random forest is a kind of ensemble model. Ensembles combine the predictions of multiple models to create a more accurate final prediction. We'll make a simple ensemble to see how they work. Let's create two decision trees with slightly different parameters: one with min_samples_leaf set to 2, and one with max_depth set to 5. Then, we'll check their accuracies separately.
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
columns = ["age", "workclass", "education_num", "marital_status", "occupation", "relationship", "race", "sex", "hours_per_week", "native_country"]
clf = DecisionTreeClassifier(random_state=1, min_samples_leaf=2)
clf.fit(train[columns], train["high_income"])
clf2 = DecisionTreeClassifier(random_state=1, max_depth=5)
clf2.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
print(roc_auc_score(test["high_income"], predictions))
predictions = clf2.predict(test[columns])
print(roc_auc_score(test["high_income"], predictions))
###Output
0.687896422606
0.675985390651
###Markdown
Combining our predictors: When we have multiple classifiers making predictions, we can treat each set of predictions as a column in a matrix. Whenever we add more models to our ensemble, we just add more columns to the combined predictions. Ultimately, we don't want this matrix, though -- we want one prediction per row in the training data. There are many ways to get from the output of multiple models to a final vector of predictions. One method is majority voting, in which each classifier gets a "vote," and the most commonly voted value for each row "wins." This only works if there are more than two classifiers (and ideally an odd number, so we don't have to write a rule to break ties), so with only two trees here we'll take a different approach. We can use the predict_proba() method instead, which will predict a probability from 0 to 1 that a given class is the right one for a row. Because 0 and 1 are our two classes, we'll get a matrix with one row per row of the income dataframe and two columns. Each row will correspond to a prediction. The first column is the probability that the prediction is a 0, and the second column is the probability that the prediction is a 1. Each row adds up to 1. If we just take the second column, we get the average value that the classifier would predict for that row. If there's a .9 probability that the correct classification is 1, we can use the .9 as the value the classifier is predicting. This will give us a continuous output in a single vector, instead of just 0 or 1. Then we can add together all of the vectors we get through this method, and divide the sum by the total number of vectors to get the mean prediction made across the entire ensemble for a particular row. Finally, we round off to get a 0 or 1 prediction for the row. If we use the predict_proba() method on both classifiers from the last screen to generate probabilities, take the mean for each row, and then round the results, we'll get ensemble predictions.
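For completeness, here is a sketch of the majority-voting option mentioned above, with a third tree added so that ties cannot occur; the parameter choices are arbitrary. The probability-averaging approach described in the last sentence follows in the next cell.
```python
import numpy
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

clfs = [
    DecisionTreeClassifier(random_state=1, min_samples_leaf=2),
    DecisionTreeClassifier(random_state=1, max_depth=5),
    DecisionTreeClassifier(random_state=1, min_samples_split=50),
]
votes = []
for c in clfs:
    c.fit(train[columns], train["high_income"])
    votes.append(c.predict(test[columns]))

# a row is predicted 1 only if at least 2 of the 3 trees voted 1
majority = (numpy.sum(votes, axis=0) >= 2).astype(int)
print(roc_auc_score(test["high_income"], majority))
```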
###Code
predictions = clf.predict_proba(test[columns])[:,1]
predictions2 = clf2.predict_proba(test[columns])[:,1]
combined = (predictions + predictions2) / 2
rounded = numpy.round(combined)
print(roc_auc_score(test["high_income"], rounded))
###Output
0.715084680404
###Markdown
Why ensembling works: The models are approaching the same problem in slightly different ways, and building different trees because we used different parameters for each one. Each tree makes different predictions in different areas. Even though both trees have about the same accuracy, when we combine them, the result is stronger because it leverages the strengths of both approaches. The more "diverse" or dissimilar the models we use to construct an ensemble are, the stronger their combined predictions will be (assuming that all of the models have about the same accuracy). Ensembling a decision tree and a logistic regression model, for example, will result in stronger predictions than ensembling two decision trees with similar parameters. That's because those two models use very different approaches to arrive at their answers. On the other hand, if the models we ensemble are very similar in how they make predictions, ensembling will result in a negligible boost. Introducing Variation with Bagging: A random forest is an ensemble of decision trees. If we don't make any modifications to the trees, each tree will be exactly the same, so we'll get no boost when we ensemble them. In order to make ensembling effective, we have to introduce variation into each individual decision tree model. If we introduce variation, each tree will be constructed slightly differently, and will therefore make different predictions. This variation is what puts the "random" in "random forest." There are two main ways to introduce variation in a random forest -- bagging and random feature subsets. We'll dive into bagging first. In a random forest, we don't train each tree on the entire data set. We train it on a random sample of the data, or a "bag," instead. We perform this sampling with replacement, which means that after we select a row from the data we're sampling, we put the row back in the data so it can be picked again. Some rows from the original data may appear in the "bag" multiple times. Let's use bagging with the first tree we trained.
###Code
# We'll build 10 trees
tree_count = 10
# Each "bag" will have 60% of the number of original rows
bag_proportion = .6
predictions = []
for i in range(tree_count):
# We select 60% of the rows from train, sampling with replacement
# We set a random state to ensure we'll be able to replicate our results
# We set it to i instead of a fixed value so we don't get the same sample in every loop
# That would make all of our trees the same
bag = train.sample(frac=bag_proportion, replace=True, random_state=i)
# Fit a decision tree model to the "bag"
clf = DecisionTreeClassifier(random_state=1, min_samples_leaf=2)
clf.fit(bag[columns], bag["high_income"])
# Using the model, make predictions on the test data
predictions.append(clf.predict_proba(test[columns])[:,1])
combined = numpy.sum(predictions, axis=0) / 10
rounded = numpy.round(combined)
print(roc_auc_score(test["high_income"], rounded))
###Output
0.732996329747
###Markdown
Using the bagging example from the previous screen, we gained some accuracy over a single decision tree. To be exact, we achieved an AUC score of around .733 with bagging, which is an improvement over the AUC score of .688 we got without bagging.Let's go back to the decision tree algorithm we explored two missions ago to explain random feature subsets:First we pick the maximum number of features we want to evaluate each time we split the tree.This has to be less than the total number of columns in the data.Every time we split, we pick a random sample of features from the data.Then we compute the information gain for each feature in our random sample, and pick the one with the highest information gain to split on.We're repeating the same process to select the optimal split for a node, but we'll only evaluate a constrained set of features that we select randomly. This introduces variation into the trees, and makes for more powerful ensembles.We can also repeat the random subset selection process in scikit-learn. We just set the splitter parameter on DecisionTreeClassifier to "random", and the max_features parameter to "auto". If we have N columns, this will pick a subset of features of size equal to the square root of N, compute the Gini coefficient for each (this is similar to information gain), and split the node on the best column in the subset.This is essentially the same thing we did on the previous screen, but with far less typing.
###Code
# We'll build 10 trees
tree_count = 10
# Each "bag" will have 60% of the number of original rows
bag_proportion = .6
predictions = []
for i in range(tree_count):
# We select 60% of the rows from train, sampling with replacement
# We set a random state to ensure we'll be able to replicate our results
# We set it to i instead of a fixed value so we don't get the same sample every time
bag = train.sample(frac=bag_proportion, replace=True, random_state=i)
# Fit a decision tree model to the "bag"
clf = DecisionTreeClassifier(random_state=1, min_samples_leaf=2)
clf.fit(bag[columns], bag["high_income"])
# Using the model, make predictions on the test data
predictions.append(clf.predict_proba(test[columns])[:,1])
combined = numpy.sum(predictions, axis=0) / 10
rounded = numpy.round(combined)
print(roc_auc_score(test["high_income"], rounded))
predictions = []
for i in range(tree_count):
# We select 60% of the rows from train, sampling with replacement
# We set a random state to ensure we'll be able to replicate our results
# We set it to i instead of a fixed value so we don't get the same sample every time
bag = train.sample(frac=bag_proportion, replace=True, random_state=i)
# Fit a decision tree model to the "bag"
clf = DecisionTreeClassifier(random_state=1, min_samples_leaf=2, splitter="random", max_features="auto")
clf.fit(bag[columns], bag["high_income"])
# Using the model, make predictions on the test data
predictions.append(clf.predict_proba(test[columns])[:,1])
combined = numpy.sum(predictions, axis=0) / 10
rounded = numpy.round(combined)
print(roc_auc_score(test["high_income"], rounded))
###Output
0.732996329747
0.7345958638
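###Markdown
To make the manual feature-subset idea above concrete, here is a rough sketch (not from the original lesson) that draws a random subset of two columns and computes an entropy-based information gain for each, using a simple median split purely for illustration; `train`, `columns`, and the `high_income` target are the objects already used in this notebook.
###Code
# Sketch of random feature subsets: evaluate information gain on a random sample of columns
def entropy(series):
    # Shannon entropy of a categorical/binary series
    probs = series.value_counts(normalize=True)
    return -(probs * numpy.log2(probs)).sum()
def information_gain(data, column, target):
    # Information gain of splitting `data` on the median of `column`
    original = entropy(data[target])
    median = data[column].median()
    left, right = data[data[column] <= median], data[data[column] > median]
    remainder = sum((len(part) / len(data)) * entropy(part[target]) for part in (left, right))
    return original - remainder
numpy.random.seed(1)
subset = numpy.random.choice(columns, 2, replace=False)
gains = {col: information_gain(train, col, "high_income") for col in subset}
# We would split on the column with the highest information gain within the random subset
print(gains)
print(max(gains, key=gains.get))
###Output
_____no_output_____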
###Markdown
Putting it all togetherScikit-learn has a RandomForestClassifier class and a RandomForestRegressor class that enable us to train and test random forest models quickly.When we instantiate a RandomForestClassifier, we pass in an n_estimators parameter that indicates how many trees to build. While adding more trees usually improves accuracy, it also increases the overall time the model takes to train.
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=5, random_state=1, min_samples_leaf=2)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
print(roc_auc_score(test["high_income"], predictions))
###Output
0.734746139194
###Markdown
Tweaking parameters to increase accuracySimilar to decision trees, we can tweak some of the parameters for random forests, including:- min_samples_leaf- min_samples_split- max_depth- max_leaf_nodesThese parameters apply to the individual trees in the model, and change how they are constructed. There are also parameters specific to the random forest that alter its overall construction:- n_estimators- bootstrap - "Bootstrap aggregation" is another name for bagging; this parameter indicates whether to turn it on (Defaults to True) Reducing Overfitting One of the major advantages of random forests over single decision trees is that they tend to overfit less. Although each individual decision tree in a random forest varies widely, the average of their predictions is less sensitive to the input data than a single tree is. This is because while one tree can construct an incorrect and overfit model, the average of 100 or more trees will be more likely to hone in on the signal and ignore the noise. The signal will be the same across all of the trees, whereas each tree will hone in on the noise differently. This means that the average will discard the noise and keep the signal.In the following code cell, you'll see that we've fit a single decision tree to the training set, and made predictions for both the training and testing sets. The AUC for the training set predictions is .819, while the AUC for the testing set is .714. The fact that the test AUC is much lower than the train AUC indicates that the model is overfitting.
###Code
clf = DecisionTreeClassifier(random_state=1, min_samples_leaf=5)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(train[columns])
print(roc_auc_score(train["high_income"], predictions))
predictions = clf.predict(test[columns])
print(roc_auc_score(test["high_income"], predictions))
clf = RandomForestClassifier(n_estimators=150, random_state=1, min_samples_leaf=5)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(train[columns])
print(roc_auc_score(train["high_income"], predictions))
predictions = clf.predict(test[columns])
print(roc_auc_score(test["high_income"], predictions))
###Output
0.819257048953
0.713932589928
0.791704729514
0.749887434396
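###Markdown
As a small sketch (not part of the original lesson), the cell below sets the tree-level and forest-level parameters listed earlier explicitly; the particular values are arbitrary and only meant to show where each knob goes.
###Code
# Sketch: tweaking both per-tree and forest-level random forest parameters
clf = RandomForestClassifier(
    n_estimators=50,        # number of trees in the ensemble
    bootstrap=True,         # "bootstrap aggregation", i.e. bagging, on or off
    max_depth=10,           # per-tree limits, as with a single decision tree
    max_leaf_nodes=64,
    min_samples_split=10,
    min_samples_leaf=5,
    random_state=1,
)
clf.fit(train[columns], train["high_income"])
predictions = clf.predict(test[columns])
print(roc_auc_score(test["high_income"], predictions))
###Output
_____no_output_____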
###Markdown
Workshop - Decision TreesIn this workshop we demonstrate the working of decision trees.
###Code
# Importing libraries in Python
import sklearn.datasets as datasets
import pandas as pd
# Loading the iris dataset
iris=datasets.load_iris()
# Forming the iris dataframe
df=pd.DataFrame(iris.data, columns=iris.feature_names)
print(df.head(5))
y=iris.target
print(y)
###Output
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm)
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
###Markdown
Now let us define the Decision Tree Algorithm
###Code
# Defining the decision tree algorithm
from sklearn.tree import DecisionTreeClassifier
dtree=DecisionTreeClassifier()
dtree.fit(df,y)
print('Decision Tree Classifier Created')
###Output
Decision Tree Classifier Created
###Markdown
Let us visualize the Decision Tree to understand it better.
###Code
# Install required libraries
!pip install pydotplus
!apt-get install graphviz -y
# Import necessary libraries for graph viz
from io import StringIO  # sklearn.externals.six has been removed from recent scikit-learn releases
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
# Visualize the graph
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data, feature_names=iris.feature_names,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
###Output
_____no_output_____
###Markdown
Read Data
###Code
data = pd.read_csv("datasets/iris.csv")
data.head()
###Output
_____no_output_____
###Markdown
Describe
###Code
data.describe()
data.info()
data.groupby(by="Species").count()
###Output
_____no_output_____
###Markdown
Visualize
###Code
import seaborn as sns  # seaborn is not imported earlier in this notebook; needed for the plots below
sns.scatterplot(x="SepalLengthCm", y="SepalWidthCm", hue="Species", data=data)
sns.pairplot(data, hue="Species")
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
label_encoder = LabelEncoder()
data["Species"] = label_encoder.fit_transform(data["Species"])
data.head()
data["Species"].value_counts()
data.drop("Id", axis=1, inplace=True)
data.head()
X, y = data.iloc[:, :-1], data.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=123)
###Output
_____no_output_____
###Markdown
Model
###Code
import numpy as np  # needed below for np.asarray / np.argmax
import xgboost as xgb
dmatrix_train = xgb.DMatrix(data=X_train, label=y_train)
dmatrix_test = xgb.DMatrix(data=X_test, label=y_test)
param = {'max_depth':3,
'eta':1,
'objective':'multi:softprob',
'num_class':3}
num_round = 5
model = xgb.train(param, dmatrix_train, num_round)
preds = model.predict(dmatrix_test)
preds[:10]
best_preds = np.asarray([np.argmax(line) for line in preds])
best_preds
###Output
_____no_output_____
###Markdown
Metrics
###Code
import numpy as np
from sklearn.metrics import precision_score, recall_score, accuracy_score
print("Precision = {}".format(precision_score(y_test, best_preds, average='macro')))
print("Recall = {}".format(recall_score(y_test, best_preds, average='macro')))
print("Accuracy = {}".format(accuracy_score(y_test, best_preds)))
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, best_preds)
sns.heatmap(cm, square=True, annot=True, cbar=False)
###Output
_____no_output_____
###Markdown
Decision Trees and Random Forests
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
###Output
_____no_output_____
###Markdown
Creating a decision tree
###Code
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
###Output
_____no_output_____
###Markdown
Decision Tree Levels
###Code
from helpers_05_08 import visualize_tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_blobs
fig, ax = plt.subplots(1, 4, figsize=(16, 3))
fig.subplots_adjust(left=0.02, right=0.98, wspace=0.1)
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
for axi, depth in zip(ax, range(1, 5)):
model = DecisionTreeClassifier(max_depth=depth)
visualize_tree(model, X, y, ax=axi)
axi.set_title('depth = {0}'.format(depth))
fig.savefig('figures/05.08-decision-tree-levels.png')
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier().fit(X, y)
def visualize_classifier(model, X, y, ax=None, cmap='rainbow'):
ax = ax or plt.gca()
# Plot the training points
ax.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=cmap,
clim=(y.min(), y.max()), zorder=3)
ax.axis('tight')
ax.axis('off')
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# fit the estimator
model.fit(X, y)
xx, yy = np.meshgrid(np.linspace(*xlim, num=200),
np.linspace(*ylim, num=200))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
# Create a color plot with the results
n_classes = len(np.unique(y))
contours = ax.contourf(xx, yy, Z, alpha=0.3,
levels=np.arange(n_classes + 1) - 0.5,
cmap=cmap, clim=(y.min(), y.max()),
zorder=1)
ax.set(xlim=xlim, ylim=ylim)
visualize_classifier(DecisionTreeClassifier(), X, y)
###Output
/home/mattiaguerri/Dropbox/project_X/.venv/lib/python3.6/site-packages/ipykernel_launcher.py:23: UserWarning: The following kwargs were not used by contour: 'clim'
###Markdown
Fit a model and make a prediction
###Code
print(X.shape)
print(y.shape)
model = DecisionTreeClassifier()
model.fit(X, y)
print(X[30, :])
print(0.99 * X[30, :])
print(y[30])
W = np.empty((1, 2))
W[0, :] = 0.99 * X[30, :]
model.predict(W)
###Output
_____no_output_____
###Markdown
Decision trees and over-fitting Random Forests
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
tree = DecisionTreeClassifier()
bag = BaggingClassifier(tree, n_estimators=100, max_samples=0.8,
random_state=1)
bag.fit(X, y)
visualize_classifier(bag, X, y)
###Output
/home/mattiaguerri/Dropbox/project_X/.venv/lib/python3.6/site-packages/ipykernel_launcher.py:23: UserWarning: The following kwargs were not used by contour: 'clim'
###Markdown
Now let's do random forest
###Code
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_classifier(model, X, y);
###Output
/home/mattiaguerri/Dropbox/project_X/.venv/lib/python3.6/site-packages/ipykernel_launcher.py:23: UserWarning: The following kwargs were not used by contour: 'clim'
###Markdown
Random Forest Regression
###Code
rng = np.random.RandomState(42)
x = 10 * rng.rand(200)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * rng.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');
print(x.shape)
print(y.shape)
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(200)
forest.fit(x[:, None], y)
xfit = np.linspace(0, 10, 1000)
yfit = forest.predict(xfit[:, None])
ytrue = model(xfit, sigma=0)
plt.errorbar(x, y, 0.3, fmt='o', alpha=0.5)
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
###Output
_____no_output_____
###Markdown
Random Forest for Classifying Digits
###Code
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
digits.target.shape
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(25):
ax = fig.add_subplot(5, 5, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
# split train and test set
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(digits.data, digits.target,
random_state=0
)
model = RandomForestClassifier(n_estimators=1000)
model.fit(Xtrain, ytrain)
ypred = model.predict(Xtest)
from sklearn import metrics
print(metrics.classification_report(ypred, ytest))
# plot the confusion matrix
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(ytest, ypred)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)
plt.xlabel('true label')
plt.ylabel('predicted label');
###Output
_____no_output_____
###Markdown
Just one tree
###Code
# In this first model, we will implement just one tree
from sklearn import tree
# Create the model
oneTree = tree.DecisionTreeClassifier()
# Train the model
oneTree = oneTree.fit(iris.data, iris.target)
# Let's plot the resulting tree
import graphviz
dot_data = tree.export_graphviz(oneTree,out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
###Markdown
Random Forest
###Code
# Separate the features from our target
iris_X = iris.data
iris_Y = iris.target
print(iris.feature_names)
iris_X
print(iris.target_names)
iris_Y
# dataset size
# n * m = 4 * 150
print(iris_X.shape)
# 1 * m = 1 * 150
print(iris_Y.shape)
# Encode the 3 iris classes with OneHotEncoder
from sklearn import preprocessing
enc = preprocessing.OneHotEncoder()
enc.fit(iris_Y.reshape(-1, 1))
iris_Y = enc.transform(iris_Y.reshape(-1, 1)).toarray()
iris_Y.shape
# Create a dataframe for visualization
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df.head()
# Create the training and test datasets
X_train, X_test, Y_train, Y_test = train_test_split(iris_X, iris_Y, test_size = 0.10)
# Define the model: Random Forest
model = RandomForestClassifier(n_estimators=3 )
# Train the model
model.fit(X_train, Y_train)
# Which variables are the most important?
model.feature_importances_
Y_test
Y_train
Y_pred = model.predict(X_test)
Y_pred
###Output
_____no_output_____
###Markdown
###Code
# authentication script in GCP
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!apt-get install software-properties-common
!apt-get install -y -qq software-properties-common module-init-tools
!apt-get install -y -qq python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
#script for reading data from google drive
!mkdir -p drive
!google-drive-ocamlfuse drive
#we import the important libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
data = pd.read_csv('drive//app//iris.csv')
data.isnull().sum()
data.head()
le = LabelEncoder()
data['Target'] = le.fit_transform(data.Target)
x = data.iloc[ :, [0,1,2,3]].values
y = data.iloc[:, 4].values
train_X,test_X,train_Y,test_Y = train_test_split(x,y, test_size = 0.30, random_state = 0)
sc = StandardScaler()
train_X = sc.fit_transform(train_X)
test_X = sc.transform(test_X)
cl = DecisionTreeClassifier()
cl.fit(train_X,train_Y)
pred = cl.predict(test_X)
score = accuracy_score(test_Y,pred)
score
###Output
_____no_output_____
###Markdown
DECISION TREES**File:** DecisionTrees.ipynb**Course:** Data Science Foundations: Data Mining in Python IMPORT LIBRARIES
###Code
import matplotlib.pyplot as plt # For plotting data
import seaborn as sns # For plotting data
import pandas as pd # For dataframes
from sklearn.model_selection import GridSearchCV # For parameter optimization
from sklearn.tree import DecisionTreeClassifier, plot_tree # For decision trees
from sklearn.metrics import plot_confusion_matrix # Evaluation measure
###Output
_____no_output_____
###Markdown
LOAD AND PREPARE DATALoad the training data `trn` and testing data `tst` from the CSV files in the data directory. Separate the data matrix from the class variable.
###Code
# Imports the training data
trn = pd.read_csv('data/spambase_trn.csv')
# Separates the attributes X0-X56 into X_trn
X_trn = trn.filter(regex='\d')
# Separates the class variable into y_trn
y_trn = trn.y
# Imports the testing data
tst = pd.read_csv('data/spambase_tst.csv')
# Separates the attributes X0-X56 into X_tst
X_tst = tst.filter(regex='\d')
# Separates the class variable into y_tst
y_tst = tst.y
# Class labels
spam = ['Not Spam','Spam']
###Output
_____no_output_____
###Markdown
Look at the first few rows of the training data.
###Code
trn.head()
###Output
_____no_output_____
###Markdown
DECISION TREE: TRAIN MODEL Fit the Training DataA simple method to learn a decision tree is to create a `DecisionTreeClassifier` object and fit it to the training data. The object has a method `score()` that returns the accuracy of the model on the given data. The `DecisionTreeClassifier` requires two parameters:- `criterion`: Can be `entropy` or `gini`- `max_leaf_nodes`: Specifies the size of the tree by explicitly stating the total leaf nodes
###Code
# Creates a DecisionTreeClassifier object
dt = DecisionTreeClassifier(
criterion='entropy',
random_state=0,
max_leaf_nodes=7)
# Fits the decision tree to training data
dt.fit(X_trn,y_trn)
###Output
_____no_output_____
###Markdown
Calculate Mean Accuracy on Training Data
###Code
print(
'Accuracy on training data: '
+ str("{:.2%}".format(dt.score(X_trn, y_trn))))
###Output
_____no_output_____
###Markdown
Optimize the Decision TreeThe `GridSearchCV` object can be used to find the optimal decision tree. This object can be set up by specifying a range of values for `max_leaf_nodes` and the two possible values of `criterion`. In the code below `GridSearchCV` is set up with the default 5 fold cross validation.
###Code
# Defines a DecisionTreeClassifier object
dt = DecisionTreeClassifier(
random_state=1)
# Possible values for max_leaf_nodes to try
param = range(6,45,2)
# Sets up GridSearchCV object and stores it in grid variable
grid = GridSearchCV(
dt,
{'max_leaf_nodes': param,
'criterion': ['entropy','gini']})
# Fits the grid to the training data
grid.fit(X_trn,y_trn)
# Stores the optimum model in best_dt
best_dt = grid.best_estimator_
# Displays the optimum model
best_dt.get_params()
###Output
_____no_output_____
###Markdown
Plot Accuracy Against Various ParametersThe code below creates a plot of accuracy against various values of `max_leaf_nodes`. The `gini` and `entropy` measures are plotted separately.
###Code
# Plots the mean accuracy against max_leaf_nodes
sns.relplot(
data=pd.DataFrame.from_dict(grid.cv_results_, orient='columns'),
kind='line',
x='param_max_leaf_nodes',
y='mean_test_score',
hue='param_criterion'
)
# Draws a vertical red line, where the best model is
plt.axvline(
x=best_dt.max_leaf_nodes,
color='red',
ls='--')
###Output
_____no_output_____
###Markdown
Display the Decision TreeUse `plot_tree()` to display the decision tree. The two class labels have two different shades to distinguish between them.
###Code
# Sets the figure size
fig = plt.figure(figsize=(25, 25))
# Creates a visual display of the model.
# Keep max_depth small for better visualization
t = plot_tree(
best_dt,
class_names=spam,
max_depth=3,
filled=True)
###Output
_____no_output_____
###Markdown
TEST MODELDisplay the confusion matrix for the test data `tst` using the optimum decision tree model, `best_dt`, found in the training phase. A good evaluation measure is the `confusion matrix` that gives the fraction of true positives, true negatives, false positives, and false negatives. Visualize the Confusion MatrixNormalize the scores to display as proportions across rows.
###Code
plot_confusion_matrix(
best_dt, X_tst, y_tst,
display_labels=spam,
normalize='true')
###Output
_____no_output_____
###Markdown
Calculate Mean Accuracy on Testing Data
###Code
print(
'Accuracy on testing data: '
+ str("{:.2%}".format(best_dt.score(X_tst, y_tst))))
###Output
_____no_output_____ |
doc/cookbook/calc_genetic_distance.ipynb | ###Markdown
Genetic distance calculation Fast pairwise distance estimationFor a limited number of evolutionary models a fast implementation is available.
###Code
from cogent3 import available_distances
available_distances()
###Output
_____no_output_____
###Markdown
Computing genetic distances using the `Alignment` objectAbbreviations listed from `available_distances()` can be used as values for the `distance_matrix(calc=)`.
###Code
from cogent3 import load_aligned_seqs
aln = load_aligned_seqs('../data/primate_brca1.fasta', moltype="dna")
dists = aln.distance_matrix(calc="tn93", show_progress=False)
dists
###Output
_____no_output_____
###Markdown
Using the distance calculator directly
###Code
from cogent3 import load_aligned_seqs, get_distance_calculator
aln = load_aligned_seqs('../data/primate_brca1.fasta')
dist_calc = get_distance_calculator("tn93", alignment=aln)
dist_calc
dist_calc.run(show_progress=False)
dists = dist_calc.get_pairwise_distances()
dists
###Output
_____no_output_____
###Markdown
The distance calculation object can provide more information. For instance, the standard errors.
###Code
dist_calc.stderr
###Output
_____no_output_____
###Markdown
Likelihood based pairwise distance estimationThe standard ``cogent3`` likelihood function can also be used to estimate distances. Because these require numerical optimisation they can be significantly slower than the fast estimation approach above.The following will use the F81 nucleotide substitution model and perform numerical optimisation.
###Code
from cogent3 import load_aligned_seqs, get_model
from cogent3.evolve import distance
aln = load_aligned_seqs('../data/primate_brca1.fasta', moltype="dna")
d = distance.EstimateDistances(aln, submodel=get_model("F81"))
d.run(show_progress=False)
dists = d.get_pairwise_distances()
dists
###Output
_____no_output_____ |
notebooks/todo/DopplerTest.ipynb | ###Markdown
Config
###Code
%matplotlib inline
%config InlineBackend.figure_format = "retina"
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Disable annoying font warnings
matplotlib.font_manager._log.setLevel(50)
# Disable theano deprecation warnings
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings(
"ignore", category=matplotlib.MatplotlibDeprecationWarning
)
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore", category=UserWarning, module="theano")
# Style
plt.style.use("default")
plt.rcParams["savefig.dpi"] = 100
plt.rcParams["figure.dpi"] = 100
plt.rcParams["figure.figsize"] = (12, 4)
plt.rcParams["font.size"] = 14
plt.rcParams["text.usetex"] = False
plt.rcParams["font.family"] = "sans-serif"
plt.rcParams["font.sans-serif"] = ["Liberation Sans"]
plt.rcParams["font.cursive"] = ["Liberation Sans"]
try:
plt.rcParams["mathtext.fallback"] = "cm"
except KeyError:
plt.rcParams["mathtext.fallback_to_cm"] = True
plt.rcParams["mathtext.fallback_to_cm"] = True
del matplotlib
del plt
del warnings
###Output
_____no_output_____
###Markdown
Main
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize
import theano
import theano.tensor as tt
from tqdm.auto import tqdm
import starry
starry.config.quiet = True
def NAdam(cost, params, lr=0.002, b1=0.9, b2=0.999, e=1e-8, sd=0.004):
"""https://github.com/keras-team/keras/blob/master/keras/optimizers.py"""
updates = []
grads = tt.grad(cost, params)
i = theano.shared(np.array(0.0, dtype=theano.config.floatX))
i_t = i + 1.0
# Warm up
m_schedule = theano.shared(np.array(1.0, dtype=theano.config.floatX))
momentum_cache_t = b1 * (1.0 - 0.5 * (tt.pow(0.96, i_t * sd)))
momentum_cache_t_1 = b1 * (1.0 - 0.5 * (tt.pow(0.96, (i_t + 1) * sd)))
m_schedule_new = m_schedule * momentum_cache_t
m_schedule_next = m_schedule * momentum_cache_t * momentum_cache_t_1
updates.append((m_schedule, m_schedule_new))
for p, g in zip(params, grads):
m = theano.shared(p.get_value() * 0.0)
v = theano.shared(p.get_value() * 0.0)
g_prime = g / (1.0 - m_schedule_new)
m_t = b1 * m + (1.0 - b1) * g
m_t_prime = m_t / (1.0 - m_schedule_next)
v_t = b2 * v + (1.0 - b2) * tt.sqr(g)
v_t_prime = v_t / (1.0 - tt.pow(b2, i_t))
m_t_bar = (1.0 - momentum_cache_t) * g_prime + (
momentum_cache_t_1 * m_t_prime
)
updates.append((m, m_t))
updates.append((v, v_t))
p_t = p - lr * m_t_bar / (tt.sqrt(v_t_prime) + e)
new_p = p_t
updates.append((p, new_p))
updates.append((i, i_t))
return updates
###Output
_____no_output_____
###Markdown
One component Generate the spectra:
###Code
map = starry.DopplerMap(15, lazy=False, inc=60, veq=50000, nt=60)
map.load("spot")
ytru = map.amp * np.array(map.y)
D = map.design_matrix(fix_spectrum=True)
B = map._map.design_matrix(theta=np.linspace(0, 360, map.nt, endpoint=False))
B = np.repeat(B, map.nw, axis=0)
f = (D @ ytru) / (B @ ytru)
###Output
_____no_output_____
###Markdown
Solve the approximate linear problem:
###Code
eps = 5e-4
D0, D1 = D[:, 0], D[:, 1:]
B0, B1 = B[:, 0], B[:, 1:]
b = D0
A = D1 - D0.reshape(-1, 1) * B1
y1 = np.linalg.solve(A.T @ A + eps * np.eye(A.shape[1]), A.T @ (f - b))
y = np.append(1.0, y1)
model = (D @ y) / (B @ y)
###Output
_____no_output_____
###Markdown
Here's what we get:
###Code
fig = plt.figure(constrained_layout=True, figsize=(12, 8))
ax = fig.subplot_mosaic(
"""
DDEE
AAAA
BBCC
"""
)
ax["A"].plot(f)
ax["A"].plot(model)
ax["A"].set_xticks([])
ax["A"].set_ylabel("spectrum")
ax["B"].plot(ytru[1:])
ax["B"].plot(y[1:])
ax["B"].set_xticks([])
ax["B"].set_ylabel("ylm coeffs")
ax["C"].plot(B @ ytru)
axt = ax["C"].twinx()
axt.plot(B @ y, "C1")
ax["C"].set_xticks([])
ax["C"].set_ylabel("baseline")
map._map[:, :] = ytru
map._map.show(ax=ax["D"], projection="moll")
ax["D"].axis("on")
ax["D"].set_title("true")
ax["D"].set_yticks([])
ax["D"].set_xticks([])
for s in ["top", "right", "bottom", "left"]:
ax["D"].spines[s].set_visible(False)
map._map[:, :] = y
map._map.show(ax=ax["E"], projection="moll")
ax["E"].axis("on")
ax["E"].set_title("inferred")
ax["E"].set_yticks([])
ax["E"].set_xticks([])
for s in ["top", "right", "bottom", "left"]:
ax["E"].spines[s].set_visible(False)
###Output
_____no_output_____
###Markdown
Run nonlinear optimizer a bit:
###Code
eps = 5e-4
D0, D1 = D[:, 0], D[:, 1:]
B0, B1 = B[:, 0], B[:, 1:]
b = D0
A = D1 - D0.reshape(-1, 1) * B1
y1 = np.linalg.solve(A.T @ A + eps * np.eye(A.shape[1]), A.T @ (f - b))
y = np.append(1.0, y1)
model = (D @ y) / (B @ y)
y1_ = theano.shared(y1)
y_ = tt.concatenate([tt.as_tensor_variable([1.0]), y1_])
model_ = tt.dot(D, y_) / tt.dot(B, y_)
loss_ = tt.sum((f - model_) ** 2)
loss_ += tt.sum(y1_ ** 2 / (5e1) ** 2)
niter = 1000
best_loss = np.inf
best_y1 = y1
loss = np.zeros(niter)
upd = NAdam(loss_, [y1_], lr=0.0002)
train = theano.function([], [y1_, loss_], updates=upd)
for n in tqdm(range(niter)):
y1, loss[n] = train()
if loss[n] < best_loss:
best_loss = loss[n]
best_y1 = y1
print(best_loss)
plt.plot(np.log10(loss));
y1 = best_y1
y = np.append(1.0, y1)
model = (D @ y) / (B @ y)
fig = plt.figure(constrained_layout=True, figsize=(12, 8))
ax = fig.subplot_mosaic(
"""
DDEE
AAAA
BBCC
"""
)
ax["A"].plot(f)
ax["A"].plot(model)
ax["A"].set_xticks([])
ax["A"].set_ylabel("spectrum")
ax["B"].plot(ytru[1:])
ax["B"].plot(y[1:])
ax["B"].set_xticks([])
ax["B"].set_ylabel("ylm coeffs")
ax["C"].plot(B @ ytru)
axt = ax["C"].twinx()
axt.plot(B @ y, "C1")
ax["C"].set_xticks([])
ax["C"].set_ylabel("baseline")
map._map[:, :] = ytru
map._map.show(ax=ax["D"], projection="moll")
ax["D"].axis("on")
ax["D"].set_title("true")
ax["D"].set_yticks([])
ax["D"].set_xticks([])
for s in ["top", "right", "bottom", "left"]:
ax["D"].spines[s].set_visible(False)
map._map[:, :] = y
map._map.show(ax=ax["E"], projection="moll")
ax["E"].axis("on")
ax["E"].set_title("inferred")
ax["E"].set_yticks([])
ax["E"].set_xticks([])
for s in ["top", "right", "bottom", "left"]:
ax["E"].spines[s].set_visible(False)
###Output
_____no_output_____
###Markdown
Two components, one uniform
###Code
map = starry.DopplerMap(
15, lazy=False, inc=60, veq=50000, nt=20, nc=2, oversample=5
)
map.load(["s", "o"])
map.amp = 0.5, 0.5
np.random.seed(0)
mu = np.random.uniform(low=map.wav[0], high=map.wav[-1], size=map.nc)
sig = 0.025
dw = map.wav0.reshape(1, -1) - mu.reshape(-1, 1)
map.spectrum = 1.0 - np.exp(-0.5 * dw ** 2 / sig ** 2)
ytru = (map.amp.reshape(-1, 1) * np.array(map.y.T)).reshape(-1)
D = map.design_matrix(fix_spectrum=True)
B = map._map.design_matrix(theta=np.linspace(0, 360, map.nt, endpoint=False))
B = np.tile(B, [1, map.nc])
B = np.repeat(B, map.nw, axis=0)
f = (D @ ytru) / (B @ ytru)
map.show()
plt.plot(f);
eps1 = 1e-2
idx = np.zeros(map.nc * map.Ny, dtype=bool)
idx[0 :: map.Ny] = 1
D0 = D[:, idx]
D1 = D[:, ~idx]
B0 = B[:, idx]
B1 = B[:, ~idx]
y0 = np.ones(map.nc) / map.nc
b = 2 * D0 @ y0 - (D0 @ y0) * (B0 @ y0)
A = 2 * D1 - (D0 @ y0).reshape(-1, 1) * B1 - (B0 @ y0).reshape(-1, 1) * D1
y1 = np.linalg.solve(A.T @ A + eps1 * np.eye(A.shape[1]), A.T @ (f - b))
y = np.zeros(map.nc * map.Ny)
y[idx] = y0
y[~idx] = y1
plt.plot(ytru)
plt.plot(y)
map._map[:, :] = y.reshape(2, -1)[0]
map._map.show(projection="moll")
map._map[:, :] = y.reshape(2, -1)[1]
map._map.show(projection="moll")
# Guesses and regularization
y0 = np.ones(map.nc) / map.nc
eps1 = 1e-2
eps0 = 0
niter = 100
# Iterate
err = np.zeros(niter)
best_err = np.inf
best_y = np.zeros(map.nc * map.Ny)
best_model = np.zeros_like(f)
idx = np.zeros(map.nc * map.Ny, dtype=bool)
idx[0 :: map.Ny] = 1
D0 = D[:, idx]
D1 = D[:, ~idx]
B0 = B[:, idx]
B1 = B[:, ~idx]
n = 0
while n < niter:
# Solve for y1
# fp = A @ y1 + b
b = 2 * D0 @ y0 - (D0 @ y0) * (B0 @ y0)
A = 2 * D1 - (D0 @ y0).reshape(-1, 1) * B1 - (B0 @ y0).reshape(-1, 1) * D1
y1 = np.linalg.solve(A.T @ A + eps1 * np.eye(A.shape[1]), A.T @ (f - b))
y = np.zeros(map.nc * map.Ny)
y[idx] = y0
y[~idx] = y1
model = (D @ y) / (B @ y)
err[n] = np.sum((f - model) ** 2)
if err[n] < best_err:
best_err = err[n]
best_y = y
best_model = model
n += 1
# Solve for y0
# fp = A @ y0 + b
A = D0 - (B1 @ y1).reshape(-1, 1) * D0 - (D1 @ y1).reshape(-1, 1) * B0
b = 2 * D1 @ y1
y0 = np.linalg.solve(A.T @ A + eps0 * np.eye(A.shape[1]), A.T @ (f - b))
y = np.zeros(map.nc * map.Ny)
y[idx] = y0
y[~idx] = y1
model = (D @ y) / (B @ y)
err[n] = np.sum((f - model) ** 2)
if err[n] < best_err:
best_err = err[n]
best_y = y
best_model = model
n += 1
plt.plot(err)
plt.plot(f)
plt.plot(best_model)
plt.plot(ytru)
plt.plot(best_y);
map._map[:, :] = y.reshape(2, -1)[0]
map._map.show(projection="moll")
map._map[:, :] = y.reshape(2, -1)[1]
map._map.show(projection="moll")
###Output
_____no_output_____
###Markdown
one component
###Code
map = starry.DopplerMap(15, lazy=False, inc=60, veq=50000, nt=20)
map.load("spot")
map.amp = 1
ytru = map.amp * np.array(map.y)
D = map.design_matrix(fix_spectrum=True)
B = map._map.design_matrix(theta=np.linspace(0, 360, map.nt, endpoint=False))
B = np.repeat(B, map.nw, axis=0)
f = (D @ ytru) / (B @ ytru)
# Linearization:
# f ~ 2 * D0 * y0 - (D0 * y0) * (B0 * y0) - (D0 * y0) * (B1 @ y1) + 2 * D1 @ y1 - (D1 @ y1) * (B0 * y0)
plt.plot(f)
# Guesses and regularization
y0 = 1.0
eps = 1e-4
niter = 100
# Iterate
err = np.zeros(niter)
best_err = np.inf
best_y = np.zeros(map.Ny)
best_model = np.zeros_like(f)
D0, D1 = D[:, 0], D[:, 1:]
B0, B1 = B[:, 0], B[:, 1:]
n = 0
while n < niter:
# In terms of y1
# fp = A @ y1 + b
b = (2 * D0) * y0 - (D0 * B0) * (y0 ** 2)
A = (
2 * D1
- (D0 * y0).reshape(-1, 1) * B1
- ((B0 * y0).reshape(-1, 1) * D1)
)
y1 = np.linalg.solve(A.T @ A + eps * np.eye(A.shape[1]), A.T @ (f - b))
y = np.append(y0, y1)
model = (D @ y) / (B @ y)
err[n] = np.sum((f - model) ** 2)
if err[n] < best_err:
best_err = err[n]
best_y = y
best_model = model
n += 1
# In terms of y0
# fp = a * y0 ** 2 + b * y0 + c
a = -(D0 * B0)
b = 2 * D0 - B1 @ y1 * D0 - D1 @ y1 * B0
c = 2 * D1 @ y1
y0 = np.nanmean((-b + np.sqrt(b ** 2 - 4 * a * (c - f))) / (2 * a))
y = np.append(y0, y1)
model = (D @ y) / (B @ y)
err[n] = np.sum((f - model) ** 2)
if err[n] < best_err:
best_err = err[n]
best_y = y
best_model = model
n += 1
plt.plot(err)
plt.plot(f)
plt.plot(best_model)
plt.plot(ytru)
plt.plot(best_y)
###Output
_____no_output_____ |
ETL-with-spark.ipynb | ###Markdown
import zipfileurllib.request.urlretrieve("https://s3.amazonaws.com/nyc-tlc/misc/taxi_zones.zip", "/tmp/taxi_zones.zip")with zipfile.ZipFile("/tmp/taxi_zones.zip","r") as zip_ref: zip_ref.extractall("/tmp/shape")
###Code
os.listdir('/tmp/shape')
#read in our raw dataset
# shpt1 = sc.binaryFiles('s3://data-etl-o-original-raw/zone/taxi_zones_shape.zip')
# shpt1.saveAsTextFile('/tmp/taxi_zones_shape_1.zip')
sc.install_pypi_package("pandas")
sc.install_pypi_package("pyshp")
sc.install_pypi_package("shapely")
sc.install_pypi_package("descartes")
import pandas as pd
import numpy as np
import urllib.request
import zipfile
import random
import itertools
import math
import shapefile
from shapely.geometry import Polygon
from descartes.patch import PolygonPatch
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
src format:Record 0:[1, 0.116357453189, 0.0007823067885, 'Newark Airport', 1, 'EWR']target format{'OBJECTID': 1, 'Shape_Leng': 0.116357453189, 'Shape_Area': 0.0007823067885, 'zone': 'Newark Airport', 'LocationID': 1, 'borough': 'EWR'}
###Code
shp_attr = [dict(zip(fields_name, attr)) for attr in attributes]
for sattr in shp_attr:
sattr
def get_lat_lon(sf,shp_dic):
content = []
for sr in sf.shapeRecords():
shape = sr.shape
rec = sr.record
loc_id = rec[shp_dic['LocationID']]
x = (shape.bbox[0]+shape.bbox[2])/2
y = (shape.bbox[1]+shape.bbox[3])/2
content.append((loc_id, x, y))
return pd.DataFrame(content, columns=["LocationID", "longitude", "latitude"])
sf = shapefile.Reader("/tmp/shape/taxi_zones.shp")
fields_name = [field[0] for field in sf.fields[1:]]
shp_dic = dict(zip(fields_name, list(range(len(fields_name)))))
attributes = sf.records()
shp_attr = [dict(zip(fields_name, attr)) for attr in attributes]
df_loc = pd.DataFrame(shp_attr).join(get_lat_lon(sf,shp_dic).set_index("LocationID"), on="LocationID")
## df_loc.head()
spark_dfl = spark.createDataFrame(df_loc)
spark_dfl.show()
###Output
_____no_output_____
###Markdown
draw zone map by plot
###Code
def draw_zone_map(ax, sf, heat={}, text=[], arrows=[]):
continent = [235/256, 151/256, 78/256]
ocean = (89/256, 171/256, 227/256)
theta = np.linspace(0, 2*np.pi, len(text)+1).tolist()
ax.set_facecolor(ocean)
# colorbar
if len(heat) != 0:
        norm = mpl.colors.Normalize(vmin=min(heat.values()),vmax=max(heat.values())) #norm = mpl.colors.LogNorm(vmin=1,vmax=max(heat))
cm=plt.get_cmap('Reds')
sm = plt.cm.ScalarMappable(cmap=cm, norm=norm)
sm.set_array([])
plt.colorbar(sm, ticks=np.linspace(min(heat.values()),max(heat.values()),8),
boundaries=np.arange(min(heat.values())-10,max(heat.values())+10,.1))
for sr in sf.shapeRecords():
shape = sr.shape
rec = sr.record
loc_id = rec[shp_dic['LocationID']]
zone = rec[shp_dic['zone']]
if len(heat) == 0:
col = continent
else:
if loc_id not in heat:
R,G,B,A = cm(norm(0))
else:
R,G,B,A = cm(norm(heat[loc_id]))
col = [R,G,B]
# check number of parts (could use MultiPolygon class of shapely?)
nparts = len(shape.parts) # total parts
if nparts == 1:
polygon = Polygon(shape.points)
patch = PolygonPatch(polygon, facecolor=col, alpha=1.0, zorder=2)
ax.add_patch(patch)
else: # loop over parts of each shape, plot separately
for ip in range(nparts): # loop over parts, plot separately
i0 = shape.parts[ip]
if ip < nparts-1:
i1 = shape.parts[ip+1]-1
else:
i1 = len(shape.points)
polygon = Polygon(shape.points[i0:i1+1])
patch = PolygonPatch(polygon, facecolor=col, alpha=1.0, zorder=2)
ax.add_patch(patch)
x = (shape.bbox[0]+shape.bbox[2])/2
y = (shape.bbox[1]+shape.bbox[3])/2
if (len(text) == 0 and rec[shp_dic['Shape_Area']] > 0.0001):
plt.text(x, y, str(loc_id), horizontalalignment='center', verticalalignment='center')
elif len(text) != 0 and loc_id in text:
#plt.text(x+0.01, y-0.01, str(loc_id), fontsize=12, color="white", bbox=dict(facecolor='black', alpha=0.5))
eta_x = 0.05*np.cos(theta[text.index(loc_id)])
eta_y = 0.05*np.sin(theta[text.index(loc_id)])
ax.annotate("[{}] {}".format(loc_id, zone), xy=(x, y), xytext=(x+eta_x, y+eta_y),
bbox=dict(facecolor='black', alpha=0.5), color="white", fontsize=12,
arrowprops=dict(facecolor='black', width=3, shrink=0.05))
if len(arrows)!=0:
for arr in arrows:
ax.annotate('', xy = arr['dest'], xytext = arr['src'], size = arr['cnt'],
arrowprops=dict(arrowstyle="fancy", fc="0.6", ec="none"))
# display
limits = get_boundaries(sf)
plt.xlim(limits[0], limits[1])
plt.ylim(limits[2], limits[3])
###Output
_____no_output_____
###Markdown
Draw Borough region
###Code
# %matplotlib inline
def draw_region_map(ax, sf, heat={}):
continent = [235/256, 151/256, 78/256]
ocean = (89/256, 171/256, 227/256)
reg_list={'Staten Island':1, 'Queens':2, 'Bronx':3, 'Manhattan':4, 'EWR':5, 'Brooklyn':6}
reg_x = {'Staten Island':[], 'Queens':[], 'Bronx':[], 'Manhattan':[], 'EWR':[], 'Brooklyn':[]}
reg_y = {'Staten Island':[], 'Queens':[], 'Bronx':[], 'Manhattan':[], 'EWR':[], 'Brooklyn':[]}
# colorbar
if len(heat) != 0:
norm = mpl.colors.Normalize(vmin=math.sqrt(min(heat.values())), vmax=math.sqrt(max(heat.values()))) #norm = mpl.colors.LogNorm(vmin=1,vmax=max(heat))
cm=plt.get_cmap('Reds')
#sm = plt.cm.ScalarMappable(cmap=cm, norm=norm)
#sm.set_array([])
#plt.colorbar(sm, ticks=np.linspace(min(heat.values()),max(heat.values()),8), \
# boundaries=np.arange(min(heat.values())-10,max(heat.values())+10,.1))
ax.set_facecolor(ocean)
for sr in sf.shapeRecords():
shape = sr.shape
rec = sr.record
reg_name = rec[shp_dic['borough']]
if len(heat) == 0:
norm = mpl.colors.Normalize(vmin=1,vmax=6) #norm = mpl.colors.LogNorm(vmin=1,vmax=max(heat))
cm=plt.get_cmap('Pastel1')
R,G,B,A = cm(norm(reg_list[reg_name]))
col = [R,G,B]
else:
R,G,B,A = cm(norm(math.sqrt(heat[reg_name])))
col = [R,G,B]
# check number of parts (could use MultiPolygon class of shapely?)
nparts = len(shape.parts) # total parts
if nparts == 1:
polygon = Polygon(shape.points)
patch = PolygonPatch(polygon, facecolor=col, alpha=1.0, zorder=2)
ax.add_patch(patch)
else: # loop over parts of each shape, plot separately
for ip in range(nparts): # loop over parts, plot separately
i0 = shape.parts[ip]
if ip < nparts-1:
i1 = shape.parts[ip+1]-1
else:
i1 = len(shape.points)
polygon = Polygon(shape.points[i0:i1+1])
patch = PolygonPatch(polygon, facecolor=col, alpha=1.0, zorder=2)
ax.add_patch(patch)
reg_x[reg_name].append((shape.bbox[0]+shape.bbox[2])/2)
reg_y[reg_name].append((shape.bbox[1]+shape.bbox[3])/2)
for k in reg_list:
if len(heat)==0:
plt.text(np.mean(reg_x[k]), np.mean(reg_y[k]), k, horizontalalignment='center', verticalalignment='center',
bbox=dict(facecolor='black', alpha=0.5), color="white", fontsize=12)
else:
plt.text(np.mean(reg_x[k]), np.mean(reg_y[k]), "{}\n({}K)".format(k, heat[k]/1000), horizontalalignment='center',
verticalalignment='center',bbox=dict(facecolor='black', alpha=0.5), color="white", fontsize=12)
# display
limits = get_boundaries(sf)
plt.xlim(limits[0], limits[1])
plt.ylim(limits[2], limits[3])
###Output
_____no_output_____
###Markdown
get boundaries of zone
###Code
# %matplotlib inline
def get_boundaries(sf):
lat, lon = [], []
for shape in list(sf.iterShapes()):
lat.extend([shape.bbox[0], shape.bbox[2]])
lon.extend([shape.bbox[1], shape.bbox[3]])
margin = 0.01 # buffer to add to the range
lat_min = min(lat) - margin
lat_max = max(lat) + margin
lon_min = min(lon) - margin
lon_max = max(lon) + margin
return lat_min, lat_max, lon_min, lon_max
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(15,8))
ax = plt.subplot(1, 2, 1)
ax.set_title("Borough Area in NYC")
draw_region_map(ax, sf)
ax = plt.subplot(1, 2, 2)
ax.set_title("Zones in NYC")
draw_zone_map(ax, sf)
%matplot plt
df.select(['VendorID','PULocationID', 'DOLocationID', 'p_count', 'lpep_pickup_datetime', 'lpep_dropoff_datetime', 'duration','total_amount','trip_distance','minute_rate','average_speed']) \
.show(10)
# rename passenger_count to pcount shortly
df = df.withColumnRenamed('passenger_count', 'p_count')
df.select(['VendorID','PULocationID', 'DOLocationID', 'p_count', 'lpep_pickup_datetime', 'lpep_dropoff_datetime', 'duration','total_amount','trip_distance','minute_rate','average_speed']) \
.show(10)
spark_dfl.show()
dfz.show(10)
dfz.columns,df.columns,spark_dfl.columns
###Output
_____no_output_____
###Markdown
dfjoin1 = df.join(dfz, df['PULocationID'].eqNullSafe(dfz['LocationID']),'left') \ .withColumnRenamed('Borough','FromBorough').withColumnRenamed('Zone','FromZone') \ .drop('LocationID').drop('service_zone') \ .orderBy('count', ascending=False)dfjoin1.show(10) for business trend to Q1: Which zone have most pickups and drop-offs?
###Code
df_pu = pd.read_sql_query('SELECT PULocationID AS LocationID, count(*) AS PUcount \
FROM table_record \
GROUP BY PULocationID', nyc_database)
df_do = pd.read_sql_query('SELECT DOLocationID AS LocationID, count(*) AS DOcount \
FROM table_record \
GROUP BY DOLocationID', nyc_database)
df.columns
toplo = df.groupBy('PULocationID').count().orderBy('count', ascending=False).limit(10)
## count().orderBy('count', ascending=False).limit(10)
df_pu = df.select(f.col("PULocationID").alias("LocationID")).groupby('LocationID').count().withColumnRenamed('count', 'pu_count')
df_do = df.select(f.col("DOLocationID").alias("LocationID")).groupby('LocationID').count().withColumnRenamed('count', 'do_count')
df_pu.show(2)
df_do.show(2)
## use taxi zone table
joined1 = df_pu.join(df_do, 'LocationID', 'left').withColumn('total_count', df_pu.pu_count + df_do.do_count)
joined1.show(5)
dfz.show(5)
joined2 = joined1.join(dfz,'LocationID', 'left')
joined2.show(10)
df_putp5 = joined2.select(['pu_count','zone','borough']).orderBy('pu_count', ascending=False).limit(5)
df_putp5.show()
joined2.select('*').limit(10).show(10)
###Output
_____no_output_____ |
scope/volumentric_approach/data_extraction.ipynb | ###Markdown
Cleaning Timings:- Negative timings are due to missing or erroneous CT time / onset time
###Code
print(data_df['TimeOnsetCT'].isnull().sum())
(data_df['TimeOnsetCT'] < 0).sum()
data_df = data_df[(data_df['TimeOnsetCT'].isnull() == False) & (data_df['TimeOnsetCT'] > 0)]
data_df['TimeOnsetCT'].describe()
###Output
_____no_output_____
###Markdown
Cleaning Volumetric Perfusion Parameters
###Code
print(data_df['CBF'].isnull().sum())
print(data_df['T4'].isnull().sum())
print(data_df['T6'].isnull().sum())
print(data_df['T8'].isnull().sum())
print(data_df['T10'].isnull().sum())
data_df = data_df[data_df['CBF'].isnull() == False]
print(data_df['NIH on admission'].isnull().sum())
data_df = data_df[data_df['NIH on admission'].isnull() == False]
data_df['NIH on admission'].describe()
data_df[selected_variables].isnull().sum(axis = 0)
###Output
_____no_output_____
###Markdown
Because of too many missing values for BMI, this variable will be droppedFor missing variables of Patient history, absence is considered default
###Code
selected_variables.remove('BMI')
data_df.loc[data_df['Anticoagulants'].isnull(), 'Anticoagulants'] = 'no'
data_df.loc[data_df['MedHist Stroke'].isnull(), 'MedHist Stroke'] = 'no'
data_df.loc[data_df['MedHist TIA'].isnull(), 'MedHist TIA'] = 'no'
data_df.loc[data_df['MedHist ICH'].isnull(), 'MedHist ICH'] = 'no'
data_df.loc[data_df['Prestroke disability (Rankin)'].isnull(), 'Prestroke disability (Rankin)'] = 0
###Output
_____no_output_____
###Markdown
Curate clinical variables Curate Referral variable
###Code
data_df.loc[data_df['Referral'] == 'Emergency service (144)', 'Referral'] = 'Emergency service'
data_df.loc[data_df['Referral'] == 'SAMU', 'Referral'] = 'Emergency service'
data_df.loc[data_df['Referral'] == 'General practionner', 'Referral'] = 'general practitioner'
data_df.loc[data_df['Referral'] == 'in hospital stroke', 'Referral'] = 'in-hospital event'
data_df['Referral'] = data_df['Referral'].str.lower()
data_df['Referral'].value_counts()
selected_data_df = data_df[selected_variables]
###Output
_____no_output_____
###Markdown
Strip whitespaces in all medical history columns
###Code
filter_col = [col for col in selected_data_df if col.startswith('MedHist')]
selected_data_df[filter_col] = selected_data_df[filter_col].apply(lambda column: column.str.strip())
selected_data_df['MedHist Hyperlipidemia'].value_counts()
###Output
/Users/jk1/opt/anaconda3/envs/scope/lib/python3.8/site-packages/pandas/core/frame.py:3188: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self[k1] = value[k2]
###Markdown
Convert categorical variables to integers*Note: missing variables are encoded as -1 -> there are then removed again*
###Code
char_cols = selected_data_df.dtypes.pipe(lambda x: x[x == 'object']).index
# Ignore onset known column
char_cols = char_cols.drop('Time of symptom onset known')
label_mapping = {}
for c in char_cols:
selected_data_df[c], label_mapping[c] = pd.factorize(selected_data_df[c])
selected_data_df.loc[selected_data_df[c] < 0, c] = np.nan
selected_data_df['Time of symptom onset known'].value_counts()
onset_known_df = selected_data_df[selected_data_df['Time of symptom onset known'] == 'yes']
# The following subpopulation is probably suboptimally selected, as only patients with an estimated onset were selected above
onset_unknown_df = selected_data_df[selected_data_df['Time of symptom onset known'] == 'no']
wake_up_df = selected_data_df[selected_data_df['Time of symptom onset known'] == 'wake up']
curated_data_path = os.path.join(os.path.dirname(data_path), 'curated_onset_known_volumetric_data.xlsx')
onset_known_df.to_excel(curated_data_path)
###Output
_____no_output_____ |
task2_s82hdj/task2.ipynb | ###Markdown
RandomForest
###Code
# RandomForest naive w/o any tuning
model = RandomForestClassifier()
model.fit(X_train_transformed, y_train)
y_pred = model.predict(X_test_transformed)
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(y_test, predictions)
print("accuracy: {:.3f}%".format(accuracy*100))
# Tune RandomForest
## Grid search
model = RandomForestClassifier()
n_estimators = [50, 100, 200]
max_depth = [5, 10, 20, None]
min_samples_split = [2, 4, 8, 16, 32]
min_samples_leaf = [1, 2, 4, 8]
max_features = ['log2', 'sqrt', None]
param_grid = dict(n_estimators=n_estimators, max_features=max_features, max_depth=max_depth, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf)
print(param_grid)
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
grid_search = GridSearchCV(model, param_grid, scoring="neg_log_loss", n_jobs=-1, cv=kfold, verbose=1)
grid_result = grid_search.fit(X_train_transformed, y_train)
# summarize results
print("Best: {} using {}".format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("{} ({}) with: {}".format(mean, stdev, param))# plot
# RandomForest 10-fold cross val /w manual tuning
model = RandomForestClassifier(max_depth=None, max_features='log2', min_samples_leaf=1, min_samples_split=2, n_estimators=200)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(model, X_train_transformed, y_train, cv=kfold)
print("accuracy: {:.3f}% , std dev: {:.3f}%".format(results.mean()*100, results.std()*100))
# Generate prediction RandomForest
model = RandomForestClassifier(max_depth=None, max_features='log2', min_samples_leaf=1, min_samples_split=2, n_estimators=200)
model.fit(X_transformed, y)
y_predict = model.predict(X_challenge_transformed)
# Write prediction to output file
filename = 'prediction.csv'
result = DataFrame(y_predict.astype(np.int32))
result.index = result.index + len(data)
result.to_csv(filename, index_label='Id', header=['y'])
###Output
_____no_output_____
###Markdown
XGBoost
###Code
# Plot feature importance
model = XGBClassifier()
model.fit(X_train_transformed, y_train)
# feature importance
print(model.feature_importances_)
# plot
pyplot.bar(range(len(model.feature_importances_)), model.feature_importances_)
pyplot.show()
# XGBoost naive w/o any tuning
model = XGBClassifier()
model.fit(X_train_transformed, y_train)
y_pred = model.predict(X_test_transformed)
predictions = [round(value) for value in y_pred]
accuracy = accuracy_score(y_test, predictions)
print("accuracy: {:.3f}%".format(accuracy*100))
# Tune XGBoost
## Grid search
model = XGBClassifier()
n_estimators = [50, 100, 200, 400]
max_depth = [1, 2, 4, 8, 16, 32]
param_grid = dict(max_depth=max_depth, n_estimators=n_estimators)
print(param_grid)
kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
grid_search = GridSearchCV(model, param_grid, scoring="neg_log_loss", n_jobs=-1, cv=kfold, verbose=1)
grid_result = grid_search.fit(X_train_transformed, y_train)
# Print grid search results
print("Best: {} using {}".format(grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("{} ({}) with: {}".format(mean, stdev, param))# plot
# XGBoost 10-fold cross val /w tuning
model = XGBClassifier(max_depth=4, n_estimators=100, learning_rate=0.01, subsample=1.0, gamma=1.0, colsample_bytree=1.0, reg_lambda=0.1, reg_alpha=0.5, min_child_weight=1, silent=False)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(model, X_train_transformed, y_train, cv=kfold)
print("CV: accuracy: {:.3f}% , std dev: {:.3f}%".format(results.mean()*100, results.std()*100))
# Generate prediction XGBoost
model = XGBClassifier(max_depth=4, n_estimators=100, learning_rate=0.01, subsample=1.0, gamma=1.0, colsample_bytree=1.0, reg_lambda=0.1, reg_alpha=0.5, min_child_weight=1, silent=False)
model.fit(X_transformed, y)
y_predict = model.predict(X_challenge_transformed)
# Write prediction to output file
filename = 'prediction_XGBoost.csv'
result = DataFrame(y_predict.astype(np.int32))
result.index = result.index + len(data)
result.to_csv(filename, index_label='Id', header=['y'])
###Output
/usr/local/lib/python3.6/site-packages/sklearn/preprocessing/label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
if diff:
###Markdown
Deep Learning with Keras
###Code
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
y_train_categorical = keras.utils.to_categorical(y_train, num_classes=3)
y_test_categorical = keras.utils.to_categorical(y_test, num_classes=3)
y_categorical = keras.utils.to_categorical(y, num_classes=3)
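# one-hot encode the integer labels (3 classes) so the targets match the categorical_crossentropy loss used below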
model = Sequential()
model.add(Dense(128, activation='relu', input_dim=32))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
optimizer=sgd,
metrics=['accuracy'])
model.fit(X_train_transformed, y_train_categorical,
epochs=100,
batch_size=64)
score = model.evaluate(X_test_transformed, y_test_categorical, batch_size=64)
# Generate prediction
y_predict = model.predict(X_challenge_transformed)
# Write prediction to output file
filename = 'prediction_MLP.csv'
result = DataFrame(np.argmax(y_predict, axis=1))
result.index = result.index + len(data)
result.to_csv(filename, index_label='Id', header=['y'])
###Output
_____no_output_____
###Markdown
DeepSuperLearner
###Code
from sklearn.ensemble import ExtraTreesClassifier as ExtremeRandomizedTrees
from sklearn.neighbors import KNeighborsClassifier as kNearestNeighbors
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost.sklearn import XGBClassifier
from deepSuperLearner import *
from sklearn.model_selection import train_test_split
import numpy as np
ERT_learner = ExtremeRandomizedTrees(n_estimators=200, max_depth=None, max_features=1)
kNN_learner = kNearestNeighbors(n_neighbors=11)
LR_learner = LogisticRegression()
RFC_learner = RandomForestClassifier(n_estimators=200, max_depth=None)
XGB_learner = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=1.)
Base_learners = {'ExtremeRandomizedTrees':ERT_learner, 'kNearestNeighbors':kNN_learner, 'LogisticRegression':LR_learner,
'RandomForestClassifier':RFC_learner, 'XGBClassifier':XGB_learner}
np.random.seed(100)
DSL_learner = DeepSuperLearner(Base_learners)
DSL_learner.fit(X_train_transformed, y_train, max_iterations=40, sample_weight=None)
y_pred = DSL_learner.predict(X_test_transformed)
predictions = np.argmax(y_pred, axis=1)
accuracy = accuracy_score(y_test, predictions)
print("accuracy: {:.3f}%".format(accuracy*100))
# Generate prediction
y_predict = DSL_learner.predict(X_challenge_transformed)
# Write prediction to output file
filename = 'prediction_DSL.csv'
result = DataFrame(np.argmax(y_predict, axis=1))
result.index = result.index + len(data)
result.to_csv(filename, index_label='Id', header=['y'])
###Output
_____no_output_____ |
NLP/Sequences/2/C3_W2_lecture_notebook_GRU.ipynb | ###Markdown
Creating a GRU model using Trax: Ungraded Lecture Notebook For this lecture notebook you will be using Trax's layers. These are the building blocks for creating neural networks with Trax.
###Code
import trax
from trax import layers as tl
###Output
INFO:tensorflow:tokens_length=568 inputs_length=512 targets_length=114 noise_density=0.15 mean_noise_span_length=3.0
###Markdown
Trax allows you to define neural network architectures by stacking layers (similarly to other libraries such as Keras). For this, `Serial()` is often used, as it is a combinator that lets you stack layers serially using function composition. Next you can see a simple vanilla NN architecture containing one hidden (dense) layer with 128 cells and an output (dense) layer with 10 cells, on which we apply a final LogSoftmax layer.
###Code
mlp = tl.Serial(
tl.Dense(128),
tl.Relu(),
tl.Dense(10),
tl.LogSoftmax()
)
###Output
_____no_output_____
###Markdown
Each of the layers within the `Serial` combinator layer is considered a sublayer. Notice that unlike similar libraries, **in Trax the activation functions are considered layers.** To know more about the `Serial` layer check the docs [here](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.combinators.Serial).You can try printing this object:
###Code
print(mlp)
###Output
Serial[
Dense_128
Relu
Dense_10
LogSoftmax
]
###Markdown
Printing the model gives you the exact same information as the model's definition itself.By just looking at the definition you can clearly see what is going on inside the neural network. Trax is very straightforward in the way a network is defined, that is one of the things that makes it awesome! GRU MODEL To create a `GRU` model you will need to be familiar with the following layers (Documentation link attached with each layer name): - [`ShiftRight`](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.attention.ShiftRight) Shifts the tensor to the right by padding on axis 1. The `mode` should be specified and it refers to the context in which the model is being used. Possible values are: 'train', 'eval' or 'predict', predict mode is for fast inference. Defaults to "train". - [`Embedding`](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.core.Embedding) Maps discrete tokens to vectors. It will have shape `(vocabulary length X dimension of output vectors)`. The dimension of output vectors (also called `d_feature`) is the number of elements in the word embedding. - [`GRU`](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.rnn.GRU) The GRU layer. It leverages another Trax layer called [`GRUCell`](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.rnn.GRUCell). The number of GRU units should be specified and should match the number of elements in the word embedding. If you want to stack two consecutive GRU layers, it can be done by using python's list comprehension. - [`Dense`](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.core.Dense) Vanilla Dense layer. - [`LogSoftMax`](https://trax-ml.readthedocs.io/en/latest/trax.layers.htmltrax.layers.core.LogSoftmax) Log Softmax function.Putting everything together the GRU model will look like this:
###Code
mode = 'train'
vocab_size = 256
model_dimension = 512
n_layers = 2
GRU = tl.Serial(
    tl.ShiftRight(mode=mode), # Remember to pass the mode parameter if you are using it for inference/test, as the default is train
tl.Embedding(vocab_size=vocab_size, d_feature=model_dimension),
[tl.GRU(n_units=model_dimension) for _ in range(n_layers)], # You can play around n_layers if you want to stack more GRU layers together
tl.Dense(n_units=vocab_size),
tl.LogSoftmax()
)
###Output
_____no_output_____
###Markdown
Next is a helper function that prints information for every layer (sublayer within `Serial`):_Try changing the parameters defined before the GRU model and see how it changes!_
###Code
def show_layers(model, layer_prefix="Serial.sublayers"):
print(f"Total layers: {len(model.sublayers)}\n")
for i in range(len(model.sublayers)):
print('========')
print(f'{layer_prefix}_{i}: {model.sublayers[i]}\n')
show_layers(GRU)
###Output
Total layers: 6
========
Serial.sublayers_0: ShiftRight(1)
========
Serial.sublayers_1: Embedding_256_512
========
Serial.sublayers_2: GRU_512
========
Serial.sublayers_3: GRU_512
========
Serial.sublayers_4: Dense_256
========
Serial.sublayers_5: LogSoftmax
|
Summarize_Stock_News_using_Python_and_Deep_Learning_from_Web_Scraping_.ipynb | ###Markdown
1. Install and Import Baseline Dependencies
###Code
!pip install transformers
!pip install sentencepiece
from transformers import PegasusTokenizer, PegasusForConditionalGeneration
from bs4 import BeautifulSoup
import requests
###Output
_____no_output_____
###Markdown
2. Setup Summarization Model
###Code
model_name = "human-centered-summarization/financial-summarization-pegasus"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)
###Output
_____no_output_____
###Markdown
3. Summarize a Single Article
###Code
url = "https://au.finance.yahoo.com/news/china-restricting-tesla-use-uncovers-a-significant-challenge-for-elon-musk-expert-161921664.html"
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
paragraphs = soup.find_all('p')
paragraphs
paragraphs[0].text
text = [paragraph.text for paragraph in paragraphs]
words = ' '.join(text).split(' ')[:400]
ARTICLE = ' '.join(words)
ARTICLE
input_ids = tokenizer.encode(ARTICLE, return_tensors='pt')
output = model.generate(input_ids, max_length=110, num_beams=5, early_stopping=True)
summary = tokenizer.decode(output[0], skip_special_tokens=True)
summary
###Output
_____no_output_____
###Markdown
4. Building a News and Sentiment Pipeline
###Code
monitored_tickers = ['DOGE', 'TSLA', 'BTC']
###Output
_____no_output_____
###Markdown
4.1. Search for Stock News using Google and Yahoo Finance
###Code
def search_for_stock_news_urls(ticker):
search_url = "https://www.google.com/search?q=yahoo+finance+{}&tbm=nws".format(ticker)
r = requests.get(search_url)
soup = BeautifulSoup(r.text, 'html.parser')
atags = soup.find_all('a')
hrefs = [link['href'] for link in atags]
return hrefs
raw_urls = {ticker:search_for_stock_news_urls(ticker) for ticker in monitored_tickers}
raw_urls
raw_urls['DOGE']
###Output
_____no_output_____
###Markdown
4.2. Strip out unwanted URLs
###Code
import re
exclude_list = ['maps', 'policies', 'preferences', 'accounts', 'support']
def strip_unwanted_urls(urls, exclude_list):
val = []
for url in urls:
if 'https://' in url and not any(exclude_word in url for exclude_word in exclude_list):
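            # keep only real article links: grab the first http(s) URL inside Google's redirect string and drop tracking parameters after '&'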
res = re.findall(r'(https?://\S+)', url)[0].split('&')[0]
val.append(res)
return list(set(val))
cleaned_urls = {ticker:strip_unwanted_urls(raw_urls[ticker], exclude_list) for ticker in monitored_tickers}
cleaned_urls
###Output
_____no_output_____
###Markdown
4.3. Search and Scrape Cleaned URLs
###Code
def scrape_and_process(URLs):
ARTICLES = []
for url in URLs:
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')
paragraphs = soup.find_all('p')
text = [paragraph.text for paragraph in paragraphs]
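        # truncate to the first 350 words to bound the input length passed to the summarizer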
words = ' '.join(text).split(' ')[:350]
ARTICLE = ' '.join(words)
ARTICLES.append(ARTICLE)
return ARTICLES
articles = {ticker:scrape_and_process(cleaned_urls[ticker]) for ticker in monitored_tickers}
articles
articles['TSLA'][2]
###Output
_____no_output_____
###Markdown
4.4. Summarise all Articles
###Code
def summarize(articles):
summaries = []
for article in articles:
input_ids = tokenizer.encode(article, return_tensors='pt')
output = model.generate(input_ids, max_length=55, num_beams=5, early_stopping=True)
summary = tokenizer.decode(output[0], skip_special_tokens=True)
summaries.append(summary)
return summaries
summaries = {ticker:summarize(articles[ticker]) for ticker in monitored_tickers}
summaries
summaries['BTC']
summaries['DOGE']
summaries['TSLA']
###Output
_____no_output_____
###Markdown
5. Adding Sentiment Analysis
###Code
from transformers import pipeline
sentiment = pipeline('sentiment-analysis')
sentiment(summaries['BTC'])
scores = {ticker:sentiment(summaries[ticker]) for ticker in monitored_tickers}
scores
print(summaries['DOGE'][3], scores['DOGE'][3]['label'], scores['DOGE'][3]['score'])
scores['BTC'][0]['score']
###Output
_____no_output_____
###Markdown
6. Exporting Results to CSV
###Code
summaries
scores
cleaned_urls
range(len(summaries['DOGE']))
summaries['DOGE'][3]
def create_output_array(summaries, scores, urls):
output = []
for ticker in monitored_tickers:
for counter in range(len(summaries[ticker])):
output_this = [
ticker,
summaries[ticker][counter],
scores[ticker][counter]['label'],
scores[ticker][counter]['score'],
urls[ticker][counter]
]
output.append(output_this)
return output
final_output = create_output_array(summaries, scores, cleaned_urls)
final_output
final_output.insert(0, ['Ticker', 'Summary', 'Label', 'Confidence', 'URL'])
final_output
import csv
with open('Final_Result.csv', mode='w', newline='') as f:
csv_writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
csv_writer.writerows(final_output)
###Output
_____no_output_____ |
docs/notebooks/cost_models.ipynb | ###Markdown
Cost ModelsTopfarm now comes with two built-in cost models. Additional user-defined cost models can easily be integrated as well. In this example, the ability to switch between the two models is demonstrated. [Try this yourself](https://colab.research.google.com/github/DTUWindEnergy/TopFarm2/blob/master/docs/notebooks/cost_models.ipynb) (requires google account)
###Code
%%capture
# Install Topfarm if needed
import importlib
if not importlib.util.find_spec("topfarm"):
!pip install topfarm
###Output
_____no_output_____
###Markdown
Cost Model 1: DTU implementation of the NREL Cost and Scaling ModelThe first cost model in Topfarm is a python implementation of the National Renewable Energy Laboratory (NREL) Cost and Scaling Model. The report on which the model is based can be found here:https://www.nrel.gov/docs/fy07osti/40566.pdfThe model was developed from the early to mid-2000s as part of the Wind Partnership for Advanced Component Technology (WindPACT) which explored innovative turbine design (at that time) as well as innovations on the balance of plant and operations. Several detailed design studies on turbine and plant design and cost were made. For publications associated with the WindPACT program, see: [WindPACT publication list](http://nrel-primo.hosted.exlibrisgroup.com/primo_library/libweb/action/search.do;jsessionid=00F1EA4B14428BED000D0D1E7C0E2C46?fn=search&ct=search&initialSearch=true&mode=Basic&tab=default_tab&indx=1&dum=true&srt=rank&vid=Pubs&frbg=&vl%28freeText0%29=windpact&scp.scps=scope%3A%28PUBS%29%2Cscope%3A%28NREL_INTERNAL%29&vl%28870446075UI1%29=all_items)From the WindPACT studies, the NREL cost and scaling model was developed as a set of curve-fits to the underlying detailed design data and includes:* Turbine component masses and costs* Balance of system costs* Operational expenditures* Financing and other costsOver time, changes in turbine and plant technology have rendered the NREL cost and scaling model obsolete, but it is still useful as a publicly available, full levelized cost of energy (LCOE) model for wind energy. **Import Topfarm models to set up an LCOE workflow including the cost model**
###Code
# Import numerical python
import numpy as np
# Import pywake models including the IEA Wind Task 37 case study site, the Gaussian wake model and the AEP calculator
from py_wake.examples.data.iea37._iea37 import IEA37_WindTurbines, IEA37Site
from py_wake.deficit_models.gaussian import IEA37SimpleBastankhahGaussian
# Import Topfarm implementation of NREL Cost and Scaling model
from topfarm.cost_models.economic_models.turbine_cost import economic_evaluation as ee_1
# Import Topfarm constraints for site boundary and spacing
from topfarm.drivers.random_search_driver import RandomizeTurbinePosition_Circle
from topfarm.constraint_components.boundary import CircleBoundaryConstraint
from topfarm.constraint_components.spacing import SpacingConstraint
# Import Topfarm support classes for setting up problem and workflow
from topfarm.cost_models.cost_model_wrappers import CostModelComponent, AEPCostModelComponent
from topfarm.cost_models.py_wake_wrapper import PyWakeAEPCostModelComponent
from topfarm import TopFarmGroup, TopFarmProblem
from topfarm.plotting import XYPlotComp, NoPlot
# Import Topfarm implementation of Random Search or Scipy drivers
from topfarm.easy_drivers import EasyRandomSearchDriver
from topfarm.easy_drivers import EasyScipyOptimizeDriver
from topfarm.easy_drivers import EasySimpleGADriver
###Output
_____no_output_____
###Markdown
**Set up plotting capability**
###Code
try:
import matplotlib.pyplot as plt
plt.gcf()
plot_comp = XYPlotComp()
plot = True
except RuntimeError:
plot_comp = NoPlot()
plot = False
###Output
_____no_output_____
###Markdown
**Set up IEA Wind Task 37 case study site with 16 turbines.**
###Code
# site set up
n_wt = 16 # number of wind turbines
site = IEA37Site(n_wt) # site is the IEA Wind Task 37 site with a circle boundary
windTurbines = IEA37_WindTurbines() # wind turbines are the IEA Wind Task 37 3.4 MW reference turbine
wake_model = IEA37SimpleBastankhahGaussian(site, windTurbines) # select the Gaussian wake model
# vectors for turbine properties: diameter, rated power and hub height
# these are inputs to the cost model
Drotor_vector = [windTurbines.diameter()] * n_wt
power_rated_vector = [float(windTurbines.power(20)/1000)] * n_wt
hub_height_vector = [windTurbines.hub_height()] * n_wt
###Output
_____no_output_____
###Markdown
**Set up functions for the AEP and cost calculations. Here we are using the internal rate of return (IRR) as our financial metric of interest.**
###Code
# function for calculating aep as a function of x,y positions of the wind turbiens
def aep_func(x, y, **kwargs):
return wake_model(x, y).aep().sum(['wd','ws']).values*10**6
# function for calculating overall internal rate of return (IRR)
def irr_func(aep, **kwargs):
my_irr = ee_1(Drotor_vector, power_rated_vector, hub_height_vector, aep).calculate_irr()
print(my_irr)
return my_irr
###Output
_____no_output_____
###Markdown
**Now set up a problem to run an optimization using IRR as the objective function. Note that the turbines are fixed so the main driver changing the IRR will be the AEP as the turbine positions change.**
###Code
# create an openmdao component for aep and irr to add to the problem
aep_comp = CostModelComponent(input_keys=['x','y'], n_wt=n_wt, cost_function=aep_func, output_key="aep", output_unit="GWh", objective=False, output_val=np.zeros(n_wt))
irr_comp = CostModelComponent(input_keys=['aep'], n_wt=n_wt, cost_function=irr_func, output_key="irr", output_unit="%", objective=True, income_model=True)
# create a group for the aep and irr components that links their common input/output (aep)
irr_group = TopFarmGroup([aep_comp, irr_comp])
# add the group to an optimization problem and specify the design variables (turbine positions),
# cost function (irr_group), driver (random search), and constraints (circular boundary and spacing)
problem = TopFarmProblem(
design_vars=dict(zip('xy', site.initial_position.T)),
cost_comp=irr_group,
driver=EasyRandomSearchDriver(randomize_func=RandomizeTurbinePosition_Circle(), max_iter=50),
#driver=EasyScipyOptimizeDriver(optimizer='COBYLA', maxiter=200, tol=1e-6, disp=False),
#driver=EasySimpleGADriver(max_gen=100, pop_size=5, Pm=None, Pc=.5, elitism=True, bits={}),
constraints=[SpacingConstraint(200),
CircleBoundaryConstraint([0, 0], 1300.1)],
plot_comp=plot_comp)
# assign data from the optimization to a set of accessible variables and run the optimization
cost, state, recorder = problem.optimize()
###Output
59.15317035889765
###Markdown
Exercise!!**Play with the driver above to see if an improved objective function can be obtained.** DTU Cost ModelThe DTU Cost Model is based on more recent industry data. It has a similar structure to the NREL cost and scaling model and contains the major elements to calculate LCOE, IRR, etc. One key innovation of the DTU Cost model compared to the NREL cost and scaling model is the use of a detailed financial cash flow analysis. This is not yet implemented but will be added in a future version. For more information on the DTU Cost model see its background here: https://topfarm.pages.windenergy.dtu.dk/TopFarm2/user_guide.html#dtu-cost-model and the source code documentation here: https://topfarm.pages.windenergy.dtu.dk/TopFarm2/api_reference/dtucost.html **Import the new DTU Cost model**
###Code
#import the DTU cost model
from topfarm.cost_models.economic_models.dtu_wind_cm_main import economic_evaluation as ee_2
###Output
_____no_output_____
###Markdown
**Set up the site and inputs as before but with additional cost variables.**
###Code
# site set up
n_wt = 16 # number of wind turbines
site = IEA37Site(n_wt) # site is the IEA Wind Task 37 site with a circle boundary
windTurbines = IEA37_WindTurbines() # wind turbines are the IEA Wind Task 37 3.4 MW reference turbine
wake_model = IEA37SimpleBastankhahGaussian(site, windTurbines) # select the Gaussian wake model
AEPComp = PyWakeAEPCostModelComponent(wake_model, n_wt) # set up AEP calculator to use the Gaussian model
# vectors for turbine properties: diameter, rated power and hub height
# these are inputs to the cost model
Drotor_vector = [windTurbines.diameter()] * n_wt
power_rated_vector = [float(windTurbines.power(20))*1e-6] * n_wt
hub_height_vector = [windTurbines.hub_height()] * n_wt
# add additional cost model inputs for shore distance, energy price, project lifetime, rated rotor speed and water depth
distance_from_shore = 30 # [km]
energy_price = 0.1 # [Euro/kWh] What we get per kWh
project_duration = 20 # [years]
rated_rpm_array = [12] * n_wt # [rpm]
water_depth_array = [15] * n_wt # [m]
###Output
_____no_output_____
###Markdown
**Set up the cost function to use the new DTU cost model.**
###Code
# set up function for new cost model with initial inputs as set above
eco_eval = ee_2(distance_from_shore, energy_price, project_duration)
# function for calculating aep as a function of x,y positions of the wind turbiens
def aep_func(x, y, **kwargs):
return wake_model(x, y).aep().sum(['wd','ws']).values*10**6
# function for calculating overall internal rate of return (IRR)
def irr_func(aep, **kwargs):
eco_eval.calculate_irr(rated_rpm_array, Drotor_vector, power_rated_vector, hub_height_vector, water_depth_array, aep)
print(eco_eval.IRR)
return eco_eval.IRR
###Output
_____no_output_____
###Markdown
**Set up rest of problem just as in prior example and run optimization with new model.**
###Code
# create an openmdao component for aep and irr to add to the problem
aep_comp = CostModelComponent(input_keys=['x','y'], n_wt=n_wt, cost_function=aep_func, output_key="aep", output_unit="kWh", objective=False, output_val=np.zeros(n_wt))
irr_comp = CostModelComponent(input_keys=['aep'], n_wt=n_wt, cost_function=irr_func, output_key="irr", output_unit="%", objective=True, income_model=True)
# create a group for the aep and irr components that links their common input/output (aep)
irr_group = TopFarmGroup([aep_comp, irr_comp])
# add the group to an optimization problem and specify the design variables (turbine positions),
# cost function (irr_group), driver (random search), and constraints (circular boundary and spacing)
problem = TopFarmProblem(
design_vars=dict(zip('xy', site.initial_position.T)),
cost_comp=irr_group,
driver=EasyRandomSearchDriver(randomize_func=RandomizeTurbinePosition_Circle(), max_iter=50),
constraints=[SpacingConstraint(200),
CircleBoundaryConstraint([0, 0], 1300.1)],
plot_comp=plot_comp)
# assign data from the optimization to a set of accessible variables and run the optimization
cost, state, recorder = problem.optimize()
###Output
21.446512022323127
21.446512022323127
21.446512022323127
21.17893667262343
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.437270403804053
21.446512022323127
21.446512022323127
21.446512022323127
21.03553821343114
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
20.992255096835912
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.363733476893998
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.446512022323127
21.48852558866341
21.48852558866341
21.48852558866341
21.48852558866341
21.281585607342723
21.48852558866341
21.48852558866341
21.48852558866341
21.48852558866341
21.25922672347491
21.48852558866341
21.48852558866341
21.48852558866341
21.00257651418671
21.48852558866341
21.48852558866341
21.263403230547297
21.48852558866341
21.48852558866341
21.48852558866341
21.456301805729705
21.48852558866341
21.277472110151873
21.48852558866341
21.577302623117035
21.577302623117035
21.577302623117035
20.992957850395257
21.577302623117035
21.290930849806333
21.577302623117035
21.577302623117035
21.462797679778454
21.577302623117035
21.577302623117035
21.577302623117035
21.43515397737461
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.57196544375224
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.538581645219246
21.577302623117035
21.100004439465756
21.577302623117035
21.466435149229703
21.577302623117035
21.577302623117035
21.577302623117035
21.06296472635949
21.577302623117035
21.55833401755942
21.577302623117035
21.577302623117035
21.470572287468848
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.50161511285176
21.577302623117035
21.577302623117035
21.577302623117035
21.570150836228173
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.35153427253387
21.577302623117035
21.501613537685117
21.577302623117035
21.577302623117035
21.130918497068162
21.577302623117035
21.57438356618886
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.558036556377314
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.45434911494344
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.22252856494111
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.37875444182682
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.577302623117035
21.502203177340306
21.577302623117035
21.577302623117035
21.577302623117035
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.500772093175048
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.565941646924557
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.58873526794094
21.652427276781584
21.313762796093584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.191123490112407
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.45933698990017
21.652427276781584
21.652427276781584
21.652427276781584
21.652427276781584
21.172905361146952
21.652427276781584
21.52339463106443
21.652427276781584
21.652427276781584
21.257246752291238
21.652427276781584
21.50614430327602
21.652427276781584
21.652427276781584
21.652427276781584
21.391320185168716
21.652427276781584
21.647446945777894
21.652427276781584
21.43411865417406
21.652427276781584
21.652427276781584
21.652427276781584
21.144746997831287
21.652427276781584
|
Q1 PartD/Q1 PartD - Monthly/MiniProj_GRU_RMSprop_MSE_Q1_PartD_Pytorch_Monthly.ipynb | ###Markdown
**Data Pre Processing**
###Code
DATA_DIR = "Beijing-Pollution-DataSet/"
from pandas import read_csv
from datetime import datetime
from random import randint
# additional imports used by the cells below
import numpy as np
from numpy import array
import torch
from torch import nn
import matplotlib.pyplot as plt
def select_month(sequences, n_samples=250):
X, y = list(), list()
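    # each sample takes the same weekday/hour from 3 consecutive weeks as input and the 4th week as target; 672 h = 4 weeks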
    rand_hour = randint(0, 23)
    rand_day = randint(0, 6)
for i in range(0, n_samples):
        start_ix = rand_hour + rand_day*24 + 672 * i # 672 h = 4 weeks (168 h per week) between samples
idxs = []
for j in range(0, 4):
if j <=2:
idx = start_ix + (j * 168) # Add different weeks
idxs.append(idx)
if j == 3: # Target
idy = start_ix + (j * 168)
seq_x = sequences[idxs, :]
seq_y = sequences[idy, 0]
y.append(seq_y)
X.append(seq_x)
return X, y
# split a multivariate sequence into samples
def split_sequences(sequences, n_steps, n_samples=12000, start_from=0):
X, y = list(), list()
for i in range(start_from, (start_from + n_samples)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the dataset
# if end_ix > len(sequences):
# break
# gather input and output parts of the pattern
seq_x = sequences[i:end_ix, :]
seq_y = sequences[end_ix, 0]
y.append(seq_y)
X.append(seq_x)
return array(X), array(y)
# load dataset
DATA_DIR = "Beijing-Pollution-DataSet/"
data = np.load(DATA_DIR + 'polution_dataSet.npy')
scaled_data = data
x, y = select_month(data, n_samples=65)
print("X shape => ", np.array(x).shape)
print("y shape => ", np.array(y).shape)
x = np.array(x)
y = np.array(y)
dataset = data
train_X, train_y = x[0:50], y[0:50] #split_sequences(dataset, n_timesteps, n_samples=15000, start_from=0)
valid_X, valid_y = x[50:], y[50:] #split_sequences(dataset, n_timesteps, n_samples=3000, start_from=15000)
test_loader_X = torch.utils.data.DataLoader(dataset=(train_X), batch_size=20, shuffle=False)
# train_X = torch.tensor(train_X, dtype=torch.float32)
# train_y = torch.tensor(train_y, dtype=torch.float32)
print("Train X Shape :=> ", train_X.shape)
print("Train Y Shape :=> ", train_y.shape)
print("####################################")
print("Test X Shape :=> ", valid_X.shape)
print("Test Y Shape :=> ", valid_y.shape)
class GRU(torch.nn.Module):
def __init__(self, n_features=8, n_output=1, seq_length=11, n_hidden_layers=233, n_layers=1):
super(GRU, self).__init__()
self.n_features = n_features
self.seq_len = seq_length
self.n_output = n_output
        self.n_hidden = n_hidden_layers # number of units in the hidden state
        self.n_layers = n_layers # number of stacked GRU layers
# define RNN with specified parameters
        # batch_first means that the first dim of the input and output will be the batch_size
self.rnn = nn.GRU(input_size=self.n_features,
hidden_size=self.n_hidden,
num_layers=self.n_layers,
batch_first=True)
# last, fully connected layer
self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, self.n_output)
def forward(self, x, hidden):
# hidden_state = torch.zeros(self.n_layers, x.size(0), self.n_hidden).requires_grad_()
# cell_state = torch.zeros(self.n_layers, x.size(0), self.n_hidden).requires_grad_()
batch_size = x.size(0)
rnn_out, hidden = self.rnn(x, hidden)
rnn_out = rnn_out.contiguous().view(batch_size, -1)
        # rnn_out (with batch_first=True) has shape
        # (batch_size, seq_len, num_directions * hidden_size)
# for following linear layer we want to keep batch_size dimension and merge rest
# .contiguous() -> solves tensor compatibility error
out = self.l_linear(rnn_out)
return out, hidden
torch.manual_seed(13)
model = GRU(n_features=8, n_output=1, seq_length=3, n_hidden_layers=233, n_layers=1)
criterion = nn.MSELoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)
model = model#.to(device)
criterion = criterion#.to(device)
for p in model.parameters():
print(p.numel())
import time
start_time = time.time()
hidden = None
hidden_test = None
epochs = 200
model.train()
batch_size = 5
running_loss_history = []
val_running_loss_history = []
for epoch in range(epochs):
running_loss = 0.0
val_running_loss = 0.0
model.train()
for b in range(0, len(train_X), batch_size):
inpt = train_X[b:b+batch_size, :, :]
target = train_y[b:b+batch_size]
# print("Input Shape :=> ", inpt.shape)
x_batch = torch.tensor(inpt, dtype=torch.float32)
y_batch = torch.tensor(target, dtype=torch.float32)
output, hidden = model(x_batch, hidden)
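        # keep only the hidden state's values (detached from the graph) so gradients do not flow across batches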
hidden = hidden.data
loss = criterion(output.view(-1), y_batch)
running_loss += loss.item()
loss.backward()
optimizer.step()
optimizer.zero_grad()
else:
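        # for/else: this validation pass runs once after all training batches of the epoch have finished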
        with torch.no_grad(): # temporarily disable gradient tracking for the validation pass
model.eval()
for b in range(0, len(valid_X), batch_size):
inpt = valid_X[b:b+batch_size, :, :]
target = valid_y[b:b+batch_size]
x_batch_test = torch.tensor(inpt, dtype=torch.float32)
y_batch_test = torch.tensor(target, dtype=torch.float32)
# model.init_hidden(x_batch_test.size(0))
output_test, hidden_test = model(x_batch_test, hidden_test)
hidden_test = hidden_test.data
loss_valid = criterion(output_test.view(-1), y_batch_test)
val_running_loss += loss_valid.item()
val_epoch_loss = val_running_loss / len(valid_X)
val_running_loss_history.append(val_epoch_loss)
epoch_loss = running_loss / len(train_X)
running_loss_history.append(epoch_loss)
print('step : ' , epoch , ' Train loss : ' , epoch_loss, ', Valid Loss : => ', val_epoch_loss)
print("***->>>-----------------------------------------------<<<-***")
total_time = time.time() - start_time
print("===========================================================")
print("*********************************************************")
print("The total Training Time is Equal with ==> : {0} Sec.".format(total_time))
print("*********************************************************")
print("===========================================================")
f, ax = plt.subplots(1, 1, figsize=(10, 7))
plt.title("Train & Valid Loss - GRU", fontsize=18)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.plot(running_loss_history, label='Train')
plt.plot(val_running_loss_history, label='Valid')
# pyplot.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
test_x, test_y = x[50:], y[50:]
model.eval()
test_x = torch.tensor(test_x, dtype=torch.float32)
test_y = torch.tensor(test_y, dtype=torch.float32)
res, hid = model(test_x, None)
loss_test = criterion(res.view(-1), test_y)
future = 100
window_size = 11
# preds = dataset[15000:15100, 0].tolist()
# print(len(preds))
# print(preds)
# for i in range (future):
# # seq = torch.FloatTensor(preds[-window_size:])
# with torch.no_grad():
# # seq = torch.tensor(seq, dtype=torch.float32).view(1, 11, 8)
# # model.hidden = (torch.zeros(1, 1, model.hidden_size),
# # torch.zeros(1, 1, model.hidden_size))
# preds.append(model(seq))
# print(preds[11:])
fig = plt.figure(figsize=(20, 7))
plt.title("Beijing Polution Prediction - GRU", fontsize=18)
plt.ylabel('Polution')
plt.xlabel('Num data')
plt.grid(True)
plt.autoscale(axis='x', tight=True)
fig.autofmt_xdate()
plt.plot(test_y, label="Real")
plt.plot(res.detach().numpy(), label="Prediction")
plt.legend()
plt.show()
test_x, test_y = x[50:], y[50:]
model.eval()
test_running_loss = 0
with torch.no_grad(): # temporarily disable gradient tracking for the test pass
model.eval()
for b in range(0, len(test_x), batch_size):
inpt = test_x[b:b+batch_size, :, :]
target = test_y[b:b+batch_size]
x_batch_test = torch.tensor(inpt, dtype=torch.float32)
y_batch_test = torch.tensor(target, dtype=torch.float32)
# model.init_hidden(x_batch_test.size(0))
output_test, hidden_test = model(x_batch_test, hidden_test)
hidden_test = hidden_test.data
loss_test = criterion(output_test.view(-1), y_batch_test)
test_running_loss += loss_test.item()
test_epoch_loss = test_running_loss / len(test_x)
print("##########################################################")
print(">>>>---------------------------------------------------<<<<")
print(">>>>----------***************************--------------<<<<")
print("**** Test Loss :==>>> ", test_epoch_loss)
print(">>>>----------***************************--------------<<<<")
print(">>>>---------------------------------------------------<<<<")
print("##########################################################")
###Output
_____no_output_____ |
experimentation_analysis-EDA.ipynb | ###Markdown
Load metrics
###Code
import pandas as pd
df_alg=pd.read_csv('../output/metrics2309/merged_output_umda.txt')
convert_dict = {'Dataset': "string",
'Algorithm': "string",
'Population Length': "int64",
'Generations': "int64",
'Time(s)': "float64",
'AvgValue': "float64",
'BestAvgValue': "float64",
'BestGeneration': "int64",
'HV': "float64",
'Spread': "float64",
'NumSolutions': "float64",
'Spacing': "float64",
'NumGenerations': "int64"
}
#df_alg = df_alg.astype(convert_dict)
df_alg2=pd.read_csv('../output/metrics2309/merged_output_pbil.txt',header=0)
convert_dict = {'Dataset': "string",
'Algorithm': "string",
'Population Length': "int64",
'Generations': "int64",
'Time(s)': "float64",
'AvgValue': "float64",
'BestAvgValue': "float64",
'HV': "float64",
'Spread': "float64",
'NumSolutions': "int64",
'Spacing': "float64",
'NumGenerations': "int64"
}
#df_alg2 = df_alg2.astype(convert_dict)
#display(df_alg2.head(200))
df_alg3=pd.read_csv('../output/metrics2309/merged_output_grasp.txt')
#df_alg3.astype({'Evaluations': 'int64'}).dtypes
df_alg = pd.concat([df_alg, df_alg2])
df_alg = pd.concat([df_alg, df_alg3])
display(df_alg.head(200))
configs=["Algorithm","Dataset"]
df_a=(
df_alg[df_alg["Evaluations"]==10000].groupby(configs)\
[['Time(s)', 'HV', 'Spread',"AvgValue","NumSolutions","Spacing","Requirements per sol","NumEvaluations"]]\
.agg(mean_time=('Time(s)', 'mean'),
mean_hv=('HV', 'mean'),
mean_spread=('Spread', 'mean'),
mean_avgvalue=('AvgValue', 'mean'),
mean_numsolutions=('NumSolutions', 'mean'),
mean_spacing=('Spacing', 'mean'),
mean_reqs_per_sol=('Requirements per sol', 'mean'),
mean_numevaluations=('NumEvaluations', 'mean'),
)
)
display(df_a)
df_a=(
df_alg3[df_alg3["Evaluations"]==0].groupby(configs)\
[['Time(s)', 'HV', 'Spread',"AvgValue","NumSolutions","Spacing","Requirements per sol","NumEvaluations"]]\
.agg(mean_time=('Time(s)', 'mean'),
mean_hv=('HV', 'mean'),
mean_spread=('Spread', 'mean'),
mean_avgvalue=('AvgValue', 'mean'),
mean_numsolutions=('NumSolutions', 'mean'),
mean_spacing=('Spacing', 'mean'),
mean_reqs_per_sol=('Requirements per sol', 'mean'),
mean_numevaluations=('NumEvaluations', 'mean'),
)
)
display(df_a)
###Output
_____no_output_____
###Markdown
Pareto analysisChange ```dataset``` value to load different dataset Paretos
###Code
import matplotlib.pyplot as plt
import numpy as np
from algorithms.GRASP.GRASP import GRASP
from algorithms.genetic.nsgaii.nsgaii_algorithm import NSGAIIAlgorithm
from algorithms.genetic.geneticnds.geneticnds_algorithm import GeneticNDSAlgorithm
from algorithms.EDA.UMDA.umda_algorithm import UMDAAlgorithm
from algorithms.EDA.PBIL.pbil_algorithm import PBILAlgorithm
plt.rcParams['figure.figsize'] = [16, 10]
plt.rcParams['figure.dpi'] = 200 # 200 e.g. is really fine, but slower
sizes=[30,25,20,15,10,7,5]
markers=["+","x","s","v","h","o"]
labels=["Random","GPPR-MO-PR"]
datasets=["1","2","s1","s2","s3"]
dataset="s2"
seed=10
generations=100
solutions_per_iteration=100
population_length=100
gens_genetic=100
algorithms = [
GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=solutions_per_iteration,seed=seed,
init_type="uniform",local_search_type="None",path_relinking_mode="None"),
GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=solutions_per_iteration,seed=seed,
init_type="stochastically",local_search_type="best_first_neighbor_random_domination",
path_relinking_mode="after_local"),
#UMDAAlgorithm(dataset_name=dataset,population_length=population_length,max_generations=generations,
# max_evaluations=0,selected_individuals=50)
]
for i in range(len(algorithms)):
if "GRASP" in algorithms[i].file:
file = "../output/output/pareto-grasp-"+algorithms[i].file
else:
file = "../output/output/pareto-genetic-"+algorithms[i].file
data = np.loadtxt(file,delimiter=',', dtype=float)
x,y=data.T
plt.scatter(x,y,label=labels[i],s=50,marker=markers[i])
algorithm=UMDAAlgorithm(dataset_name=dataset,population_length=100,max_generations=100,
max_evaluations=0,selected_individuals=50,selection_scheme="nds",replacement_scheme="replacement")
result=algorithm.run()
func = [i.objectives for i in result["population"]]
function1 = [i[0].value for i in func]
function2 = [i[1].value for i in func]
plt.scatter(function2, function1, marker='o',label=algorithm.get_name())
#algorithm=PBILAlgorithm(dataset_name=dataset,population_length=100,max_generations=100,
# max_evaluations=0,learning_rate=0.2,mutation_prob=0.2,mutation_shift=0.2)
#result=algorithm.run()
#func = [i.objectives for i in result["population"]]
#function3 = [i[0].value for i in func]
#function4 = [i[1].value for i in func]
#plt.scatter(function4, function3, marker='s',label="PBIL")
#file = "output/backtracking.txt"
#data = np.loadtxt(file,delimiter=',', dtype=float)
#x,y=data.T
#plt.scatter(x,y,label="optimo",s=10)
plt.title(dataset)
plt.xlabel('Effort', fontsize=12)
plt.ylabel('Satisfaction', fontsize=12)
plt.legend(loc="lower right")
plt.title("Dataset "+dataset)
plt.grid(True)
#plt.show()
import os
import imageio
gif_name = "prueba"
filenames=[]
paretos = result["paretos"]
#print(len(paretos))
for pareto_index in range(len(paretos)):
plt.cla()
plt.clf()
#print(paretos[pareto_index],pareto_index)
for i in range(len(algorithms)):
if "GRASP" in algorithms[i].file:
file = "../output/output/pareto-grasp-"+algorithms[i].file
else:
file = "../output/output/pareto-genetic-"+algorithms[i].file
data = np.loadtxt(file,delimiter=',', dtype=float)
x,y=data.T
plt.scatter(x,y,label=labels[i],s=50,marker=markers[i])
plt.scatter(function2, function1, marker='o',label="UMDA")
#for j in paretos[pareto_index]:
#print(j.objectives)
func = [j.objectives for j in paretos[pareto_index]]
function3 = [j[0].value for j in func]
function4 = [j[1].value for j in func]
plt.scatter(function4, function3, marker='s',label="PBIL")
plt.title(dataset)
plt.xlabel('Effort', fontsize=12)
plt.ylabel('Satisfaction', fontsize=12)
plt.legend(loc="lower right")
plt.title("Dataset "+dataset)
plt.grid(True)
plt.draw()
#fig = plt.figure()
#plt.show()
#plt.plot(range(10))
filename = f'temp/temp'+str(pareto_index+1)+'.png'
filenames.append(filename)
plt.savefig(filename, dpi=100)
# Build GIF
print('Creating gif\n')
with imageio.get_writer(f'{gif_name}.gif', mode='I',fps=4) as writer:
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
for i in range(5):
writer.append_data(image)
print('Gif saved\n')
print('Removing Images\n')
# Remove files
for filename in set(filenames):
os.remove(filename)
print('DONE')
import os
import imageio
import matplotlib.pyplot as plt
import numpy as np
def createGIF(algorithms,input_folder="temp", output_filename="example_gif", dpi=100,fps=1,pareto_files=None):
plt.rcParams['figure.figsize'] = [16, 10]
plt.rcParams['figure.dpi'] = 200
alg_results=[]
filenames=[]
# execute each algorithm and store pareto steps
for alg_index in range(len(algorithms)):
if pareto_files is not None and pareto_files[alg_index]:
if "GRASP" in algorithms[alg_index].file:
file = "../output/output/pareto-grasp-"+algorithms[alg_index].file
else:
file = "../output/output/pareto-genetic-"+algorithms[alg_index].file
data = np.loadtxt(file,delimiter=',', dtype=float)
x,y=data.T
plt.scatter(x,y,label=algorithms[alg_index].get_name(),s=50#,marker=markers[i]
)
else:
result=algorithms[alg_index].run()
alg_results.append(result["paretos"])
#func = [i.objectives for i in result["population"]]
#function1 = [i[0].value for i in func]
#function2 = [i[1].value for i in func]
#plt.scatter(function2, function1,label=algorithms[alg_index].get_name())
# loop pareto steps and generate a frame with all points for all algorithms
for pareto_index in range(len(alg_results[0])):
plt.cla()
plt.clf()
# scatter all algorithm intermediate pareto results for a frame
for alg_index in range(len(algorithms)):
if pareto_files is not None and pareto_files[alg_index]:
if "GRASP" in algorithms[alg_index].file:
file = "../output/output/pareto-grasp-"+algorithms[alg_index].file
else:
file = "../output/output/pareto-genetic-"+algorithms[alg_index].file
data = np.loadtxt(file,delimiter=',', dtype=float)
x,y=data.T
print(f"NDS loaded from file has {len(x)} solution(s)")
plt.scatter(x,y,label=algorithms[alg_index].get_name(),s=50#,marker=markers[i]
)
else:
func = [j.objectives for j in alg_results[alg_index][pareto_index]]
functiony = [j[0].value for j in func]
functionx = [j[1].value for j in func]
plt.scatter(functionx, functiony,label=algorithms[alg_index].get_name())
# config frame
plt.xlabel('Effort', fontsize=12)
plt.ylabel('Satisfaction', fontsize=12)
plt.legend(loc="lower right")
plt.title("Dataset "+str(algorithms[0].dataset_name))
plt.grid(True)
plt.draw()
# store frame
#filename = f'input_folder+'/temp'+str(pareto_index+1)+'.png'
filename = f'{input_folder}/temp{str(pareto_index+1)}.png'
filenames.append(filename)
plt.savefig(filename, dpi=dpi)
# Build GIF
print('Creating gif\n')
with imageio.get_writer(f'{output_filename}.gif', mode='I',fps=fps) as writer:
for filename in filenames:
image = imageio.imread(filename)
writer.append_data(image)
for i in range(fps*5):
writer.append_data(image)
print('Gif saved\n')
print('Removing Images\n')
# Remove files
for filename in set(filenames):
os.remove(filename)
print('DONE')
from algorithms.GRASP.GRASP import GRASP
from algorithms.genetic.nsgaii.nsgaii_algorithm import NSGAIIAlgorithm
from algorithms.genetic.geneticnds.geneticnds_algorithm import GeneticNDSAlgorithm
from algorithms.EDA.UMDA.umda_algorithm import UMDAAlgorithm
from algorithms.EDA.PBIL.pbil_algorithm import PBILAlgorithm
dataset = "s2"
generations=100
solutions_per_iteration=100
population_length=100
gens_genetic=100
seed = 10
algorithms = [
PBILAlgorithm(dataset_name=dataset,population_length=population_length,max_generations=generations,
max_evaluations=0,learning_rate=0.2,mutation_prob=0.2,mutation_shift=0.2,debug_mode=True),
GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=solutions_per_iteration,seed=seed,
init_type="stochastically",local_search_type="best_first_neighbor_random_domination",
path_relinking_mode="after_local",debug_mode=True),
GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=solutions_per_iteration,seed=seed,
init_type="uniform",local_search_type="None",path_relinking_mode="None",debug_mode=True),
UMDAAlgorithm(dataset_name=dataset,population_length=100,max_generations=100,
max_evaluations=0,selected_individuals=50)
]
createGIF(algorithms,"temp", "prueba_gif", dpi=100,fps=5,pareto_files=[False,False,False,False])
im = imageio.get_reader('prueba_gif.gif')
for frame in im:
    print(frame.shape)
###Output
_____no_output_____
###Markdown
Metrics analysis
###Code
from sklearn import preprocessing
from scipy.stats import ranksums
import numpy as np
import plotly.graph_objects as go
import plotly.offline as pyo
import math
class AlgorithmDataGenetic():
def __init__(self,a,rs,d,p,g,ss,sc,cs,cp,ms,mp):
self.a=a
self.rs=rs
self.d=d
self.p=p
self.g=g
self.ss=ss
self.sc=sc
self.cs=cs
self.cp=cp
self.ms=ms
self.mp=mp
def findConfigurationData(self,df):
return df[(df["Population Length"]==self.p)&(df["Generations"]==self.g)
&(df["Selection Scheme"]==self.ss)&(df["Selection Candidates"]==self.sc)
&(df["Crossover Scheme"]==self.cs)&(df["Crossover Probability"]==self.cp)
&(df["Mutation Scheme"]==self.ms)&(df["Mutation Probability"]==self.mp)
&(df["Algorithm"]==self.a)&(df["Replacement Scheme"]==self.rs)
&(df["Dataset"]==self.d)
]
class AlgorithmDataGrasp():
def __init__(self,a,d,it,so,ini,ls,pr):
self.a=a
self.it=it
self.so=so
self.ls=ls
self.d=d
self.ini=ini
self.pr=pr
def findConfigurationData(self,df):
return df[(df["Iterations"]==self.it)&(df["Solutions per Iteration"]==self.so)
&(df["Local Search Type"]==self.ls)&(df["Initialization Type"]==self.ini)
&(df["Algorithm"]==self.a)&(df["Dataset"]==self.d)&(df["Path Relinking"]==self.pr)
]
dat="1"
datasets=["1","2","s1","s2","s3"]
cols=["HV","Spread","Spacing","NumSolutions","Time(s)"]
maxmin=[1,-1,1,1,-1]
for dat in datasets:
print("------Dataset "+dat+"-----")
algs = [
AlgorithmDataGenetic("GeneticNDSAlgorithm",'elitism',dat,100,100,"tournament",2,"onepoint",0.8,"flip1bit",1.0),
AlgorithmDataGenetic("NSGAIIAlgorithm",'elitism',dat,100,100,"tournament",2,"onepoint",0.6,"flip1bit",1.0),
AlgorithmDataGrasp("GRASP",dat,100,100,"stochastically","best_first_neighbor_random_domination","after_local"),
]
for j in range(len(cols)):
print(cols[j])
results=list()
best_avg=0
best_avgn=10**9
best_alg_index=None
for i in range(len(algs)):
avg=np.mean((algs[i].findConfigurationData(df_alg)[cols])[cols[j]].values)
results.append("{:.3f}".format(avg))
if maxmin[j]>0 and avg>best_avg:
best_avg=avg
best_alg_index=i
elif maxmin[j]<0 and avg<best_avgn:
best_avgn=avg
best_alg_index=i
p_best=True
p_list=[]
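        # the best algorithm counts as significantly better only if the Wilcoxon rank-sum p-value is < 0.05 against every other algorithm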
for i in range(len(algs)):
if i!=best_alg_index:
dataA=(algs[best_alg_index].findConfigurationData(df_alg)[cols])[cols[j]].values
dataB=(algs[i].findConfigurationData(df_alg)[cols])[cols[j]].values
_, p = ranksums(dataA, dataB)
print("p:",p)
if p>=0.05:
#print(dataA)
#print(dataB)
p_best=False
else:
p_list.append(i)
if p_best:
mark = '*'
else:
mark = ''
for index in p_list:
results[index]=results[index]+'-'
#results[best_alg_index]=results[best_alg_index]+mark
results.insert(0,cols[j])
print(results)
num_candidates = 30
a = np.random.choice(2,num_candidates)
print("a",a)
costs=np.array([1,2,3,4,5])
#indexes=np.array(a).nonzero()
#print( costs[indexes])
probabilities=np.full(num_candidates,1/num_candidates)
sampled = np.random.choice(np.arange(num_candidates), size=np.random.randint(num_candidates),
replace=False, p=probabilities)
print("sampled",sampled)
b = np.zeros(num_candidates)
b[sampled] = 1
print("b",b)
a = np.random.choice(2,num_candidates)
print("a",a)
print("a",a.nonzero())
import numpy as np
costs=[1,2,3,4,5]
print(np.sum(costs))
###Output
_____no_output_____
###Markdown
Bivariate MIMIC
###Code
import matplotlib.pyplot as plt
import numpy as np
from algorithms.GRASP.GRASP import GRASP
from algorithms.genetic.nsgaii.nsgaii_algorithm import NSGAIIAlgorithm
from algorithms.genetic.geneticnds.geneticnds_algorithm import GeneticNDSAlgorithm
from algorithms.EDA.UMDA.umda_algorithm import UMDAAlgorithm
from algorithms.EDA.bivariate.MIMIC.mimic_algorithm import MIMICAlgorithm
from algorithms.EDA.PBIL.pbil_algorithm import PBILAlgorithm
plt.rcParams['figure.figsize'] = [16, 10]
plt.rcParams['figure.dpi'] = 200 # 200 e.g. is really fine, but slower
sizes=[30,25,20,15,10,7,5]
markers=["+","x","s","v","h","o"]
labels=["Random","GPPR-MO-PR"]
datasets=["1","2","s1","s2","s3"]
dataset="2"
seed=10
generations=500
population_length=500
algorithms = [
GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=population_length,seed=seed,
init_type="uniform",local_search_type="None",path_relinking_mode="None"),
GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=population_length,seed=seed,
init_type="stochastically",local_search_type="best_first_neighbor_random_domination",
path_relinking_mode="after_local"),
]
for i in range(len(algorithms)):
if "GRASP" in algorithms[i].file:
file = "../output/output/pareto-grasp-"+algorithms[i].file
else:
file = "../output/output/pareto-genetic-"+algorithms[i].file
data = np.loadtxt(file,delimiter=',', dtype=float)
x,y=data.T
plt.scatter(x,y,label=labels[i],s=50,marker=markers[i])
algorithm=MIMICAlgorithm(dataset_name=dataset,population_length=population_length,max_generations=generations,
max_evaluations=0,selection_scheme="nds")
result=algorithm.run()
func = [i.objectives for i in result["population"]]
function1 = [i[0].value for i in func]
function2 = [i[1].value for i in func]
plt.scatter(function2, function1, marker='o',label=algorithm.get_name())
plt.title(dataset)
plt.xlabel('Effort', fontsize=12)
plt.ylabel('Satisfaction', fontsize=12)
plt.legend(loc="lower right")
plt.title("Dataset "+dataset)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Dependencies
###Code
import matplotlib.pyplot as plt
import numpy as np
from algorithms.GRASP.GRASP import GRASP
from algorithms.genetic.nsgaii.nsgaii_algorithm import NSGAIIAlgorithm
from algorithms.genetic.geneticnds.geneticnds_algorithm import GeneticNDSAlgorithm
from algorithms.EDA.UMDA.umda_algorithm import UMDAAlgorithm
from algorithms.EDA.bivariate.MIMIC.mimic_algorithm import MIMICAlgorithm
from algorithms.EDA.PBIL.pbil_algorithm import PBILAlgorithm
plt.rcParams['figure.figsize'] = [16, 10]
plt.rcParams['figure.dpi'] = 200 # 200 e.g. is really fine, but slower
sizes=[30,25,20,15,10,7,5]
markers=["+","x","s","v","h","o"]
dataset="s3"
seed=10
generations=10
population_length=10
tackle_dependencies=False
#algorithm=NSGAIIAlgorithm(dataset_name=dataset,max_generations=generations,population_length=population_length,
# random_seed=seed,crossover_prob=0.6,crossover="onepoint",mutation_prob=1.0,
# mutation="flip1bit",replacement="elitism",tackle_dependencies=tackle_dependencies)
#result=algorithm.run()
#func = [i.objectives for i in result["population"]]
#function1 = [i[0].value for i in func]
#function2 = [i[1].value for i in func]
#plt.scatter(function2, function1, marker='x',label=algorithm.get_name())
algorithm=GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=population_length,seed=seed,
init_type="uniform",local_search_type="None",path_relinking_mode="None",tackle_dependencies=tackle_dependencies)
result=algorithm.run()
func = [i for i in result["population"]]
function1 = [i.total_satisfaction for i in func]
function2 = [i.total_cost for i in func]
plt.scatter(function2, function1, marker='o',label=algorithm.get_name())
algorithm=GRASP(dataset=dataset,iterations=generations,solutions_per_iteration=population_length,seed=seed,
init_type="stochastically",local_search_type="best_first_neighbor_random_domination",
path_relinking_mode="after_local",tackle_dependencies=tackle_dependencies)
result=algorithm.run()
func = [i for i in result["population"]]
function1 = [i.total_satisfaction for i in func]
function2 = [i.total_cost for i in func]
plt.scatter(function2, function1, marker='+',label=algorithm.get_name())
#algorithm=MIMICAlgorithm(dataset_name=dataset,population_length=population_length,max_generations=generations,
# max_evaluations=0,selection_scheme="nds",tackle_dependencies=tackle_dependencies)
algorithm=NSGAIIAlgorithm(dataset_name=dataset,max_generations=generations,population_length=population_length,
random_seed=seed,crossover_prob=0.6,crossover="onepoint",mutation_prob=1.0,
mutation="flip1bit",replacement="elitism",tackle_dependencies=tackle_dependencies)
result=algorithm.run()
func = [i for i in result["population"]]
function1 = [i.total_satisfaction for i in func]
function2 = [i.total_cost for i in func]
plt.scatter(function2, function1, marker='v',label=algorithm.get_name())
plt.xlabel('Effort', fontsize=12)
plt.ylabel('Satisfaction', fontsize=12)
plt.legend(loc="lower right")
plt.title("Dataset "+dataset)
plt.grid(True)
from algorithms.EDA.UMDA.umda_algorithm import UMDAAlgorithm
from algorithms.EDA.PBIL.pbil_algorithm import PBILAlgorithm
dataset = "s2"
generations=100
solutions_per_iteration=100
population_length=100
gens_genetic=100
seed = 10
alg=UMDAAlgorithm(dataset_name=dataset,population_length=100,max_generations=100,
max_evaluations=0,selected_individuals=50)
print(alg.a)
###Output
a
|
vis/visulisation.ipynb | ###Markdown
AlexNet pretrained on ImageNet
###Code
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)
# remove last fully-connected layer
new_classifier = nn.Sequential(*list(model.classifier.children())[:-1])
model.classifier = new_classifier
print(model)
for name, parameter in model.named_parameters():
    if parameter.requires_grad:
        print(name, parameter.shape)
print(' Total params: %.2fM' % (sum(p.numel() for p in model.parameters())/1000000.0))
def vis_square(data):
"""Take an array of shape (n, height, width) or (n, height, width, 3)
and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)"""
# normalize data for display
data = (data - data.min()) / (data.max() - data.min())
# force the number of filters to be square
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = (((0, n ** 2 - data.shape[0]),
(0, 1), (0, 1)) # add some space between filters
+ ((0, 0),) * (data.ndim - 3)) # don't pad the last dimension (if there is one)
data = np.pad(data, padding, mode='constant', constant_values=1) # pad with ones (white)
# tile the filters into an image
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
plt.imshow(data); plt.axis('off')
# named_parameters() yields (name, tensor) pairs covering all weights and biases
filters = list(model.named_parameters())
print('filters', [name for name, _ in filters])
conv1 = model.features[0].weight
print(conv1.shape)
conv1_grad = model.features[0].weight.grad
print(type(conv1_grad))  # stays None until a backward pass has been run
if conv1_grad is not None:
    print(conv1_grad.shape)
print(conv1[0])
conv1_filter = conv1.detach().numpy()
vis_square(conv1_filter.transpose(0, 2, 3, 1))  # move channels last so the filters display as RGB tiles
idx = np.random.randint(192, size=1)
print('idx', idx)
conv3 = model.features[3].weight[idx].squeeze(0)
# print(conv3)
print(conv3.shape)
conv3_filter = conv3.detach().numpy()
vis_square(conv3_filter)
###Output
idx [167]
torch.Size([64, 5, 5])
###Markdown
Alexnet trained on Cifar 10
###Code
import torch
resume = '/media/jaden/DeepLearningCode/pytorch-classification/checkpoints/cifar10/alexnet/model_best.pth.tar'
checkpoint = torch.load(resume)
best_acc = checkpoint['best_acc']
model.load_state_dict(checkpoint['state_dict'])  # assumes the checkpoint keys match this model's architecture
print('best_acc', best_acc)
print('model', model)
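# Sketch: reuse vis_square to inspect the first conv layer of the fine-tuned model.
# Assumes the loaded model still exposes its first convolution at model.features[0];
# adjust the index if the checkpoint's AlexNet variant is structured differently.
conv1_cifar = model.features[0].weight.detach().numpy()
vis_square(conv1_cifar.transpose(0, 2, 3, 1))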
###Output
_____no_output_____ |
ai/recommended/01_using_ai_services_for_analyzing_public_data.ipynb | ###Markdown
Using AI Services for Analyzing Images and Text
by Manav Sehgal | on APR 30 2019 | by Tom Liu | on Dec 2020 | modified edition for AI Labs for Amazon Rekognition and Amazon Comprehend
So far we have been working with structured data in flat files as our data source. What if the source is images and unstructured text? AWS AI services provide vision, transcription, translation, personalization, and forecasting capabilities without the need for training and deploying machine learning models. AWS manages the machine learning complexity; you just focus on the problem at hand, send the required inputs for analysis, and receive output from these services within your applications.
Extending our open data analytics use case to New York traffic, let us use the AWS AI services to turn open data available in social media, Wikipedia, and other sources into structured datasets and insights.
We will start by importing dependencies for the AWS SDK, Python data frames, file operations, handling JSON data, and display formatting. We will initialize the Rekognition client for use in the rest of this notebook.
###Code
import boto3
import pandas as pd
import io
import json
from IPython.display import display, Markdown, Image
import sagemaker
boto_session = boto3.Session()
region = boto_session.region_name
rekognition = boto3.client('rekognition', region)
bucket_name = sagemaker.Session().default_bucket()
prefix = "images"
# download image set for the lab
!wget https://df4l9poikws9t.cloudfront.net/images.zip
!unzip -d test_images images.zip
!aws s3 cp ./test_images s3://$bucket_name/$prefix/ --recursive --include "*.png" --include "*.jpg"
###Output
_____no_output_____
###Markdown
Show Image
We will work with a number of images, so we need a way to show them within this notebook. Our helper displays a local image file from the downloaded set at a given width.
###Code
def show_image(filename, img_width = 300):
return Image(filename = filename, width = img_width)
file_name = 'sydney-street-02-unsplash.jpg'
show_image(f'./test_images/{file_name}')
###Output
_____no_output_____
###Markdown
Image Labels
One of the use cases for traffic analytics is processing traffic CCTV imagery or social media uploads. Let's consider a traffic location where, depending on the number of cars, trucks, and pedestrians, we can identify if there is a traffic jam. This insight can be used to better manage the flow of traffic around the location and plan ahead for future use of this route.
The first step in this kind of analytics is to recognize that we are actually looking at an image which may represent a traffic jam. We create the ``image_labels`` function, which uses the ``detect_labels`` Rekognition API to detect objects within an image. The function prints the labels detected along with a confidence score.
In the given example, notice that somewhere in the middle of the label listing, at 73% confidence, the Rekognition computer vision model has actually determined a traffic jam.
###Code
def image_labels(bucket, key):
image_object = {'S3Object':{'Bucket': bucket,'Name': key}}
response = rekognition.detect_labels(Image=image_object)
for label in response['Labels']:
print('{} ({:.0f}%)'.format(label['Name'], label['Confidence']))
image_labels(bucket_name, f'images/{file_name}')
###Output
_____no_output_____
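###Markdown
The same helper can be pointed at any of the other uploaded samples. A minimal sketch follows; the file names below are assumptions based on the patterns listed in the Tasks below, so adjust them to whatever sits under './test_images'.
###Code
# Hypothetical file names matching the 'olive_*', 'gear*' and 'coffee*' patterns; adjust as needed.
for sample in ['olive_01.png', 'gear_01.png', 'coffee_01.png']:
    print(f'--- {sample} ---')
    image_labels(bucket_name, f'images/{sample}')
###Output
_____no_output_____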
###Markdown
Tasks:
* Try other image files, such as 'olive_*.png', 'gear*.png' & 'coffee*.png' under the './test_images' folder.
Image Label Count
Now that we have a label detecting a traffic jam and some of the ingredients of a busy traffic location like pedestrians, trucks, and cars, let us determine quantitative data for benchmarking different traffic locations. If we can count the number of cars, trucks, and persons in an image, we can compare these numbers with other images. Our function does just that: it counts the number of instances of a matching label.
###Code
def image_label_count(bucket, key, match):
image_object = {'S3Object':{'Bucket': bucket,'Name': key}}
response = rekognition.detect_labels(Image=image_object)
count = 0
for label in response['Labels']:
if match in label['Name']:
for instance in label['Instances']:
count += 1
print(f'Found {match} {count} times.')
image_label_count(bucket_name, f'images/{file_name}', 'Car')
image_label_count(bucket_name, f'images/{file_name}', 'Truck')
image_label_count(bucket_name, f'images/{file_name}', 'Person')
###Output
_____no_output_____
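###Markdown
To benchmark different locations programmatically it helps to have the counts returned rather than only printed. A minimal sketch reusing the same ``detect_labels`` call; the helper name ``label_count`` is introduced here purely for illustration:
###Code
def label_count(bucket, key, match):
    # Same Rekognition call as image_label_count, but returns the count so images can be compared.
    image_object = {'S3Object': {'Bucket': bucket, 'Name': key}}
    response = rekognition.detect_labels(Image=image_object)
    count = 0
    for label in response['Labels']:
        if match in label['Name']:
            count += len(label['Instances'])
    return count
counts = {match: label_count(bucket_name, f'images/{file_name}', match) for match in ['Car', 'Truck', 'Person']}
print(counts)
###Output
_____no_output_____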
###Markdown
Image Text
Another use case for traffic location analytics using social media content is to understand more about a traffic location, for instance whether an incident has been reported, such as an accident, a jam, or VIP movement. For a computer program to understand a random traffic location, it may help to capture any text within the image. The ``image_text`` function uses the Amazon Rekognition service to detect text in an image.
You will notice that the text recognition is capable of reading blurry text like "The Lion King", text at a perspective like the bus route, text which may be ignored by the human eye like the address below the shoes banner, and even the text representing the taxi number. Suddenly the image starts telling a story programmatically: about what time it may represent, what the landmarks are, which bus route, which taxi was on the streets, and so on.
###Code
def image_text(bucket, key, sort_column='', parents=True):
response = rekognition.detect_text(Image={'S3Object':{'Bucket':bucket,'Name': key}})
df = pd.read_json(io.StringIO(json.dumps(response['TextDetections'])))
df['Width'] = df['Geometry'].apply(lambda x: x['BoundingBox']['Width'])
df['Height'] = df['Geometry'].apply(lambda x: x['BoundingBox']['Height'])
df['Left'] = df['Geometry'].apply(lambda x: x['BoundingBox']['Left'])
df['Top'] = df['Geometry'].apply(lambda x: x['BoundingBox']['Top'])
df = df.drop(columns=['Geometry'])
if sort_column:
df = df.sort_values([sort_column])
if not parents:
df = df[df['ParentId'] > 0]
return df
text_image_file = 'street-01-unsplash.jpg'
show_image(f'./test_images/{text_image_file}')
###Output
_____no_output_____
###Markdown
Sorting on the ``Top`` column will keep text on the same horizontal line together.
###Code
image_text(bucket_name, f'images/{text_image_file}', sort_column='Top', parents=False)
###Output
_____no_output_____
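###Markdown
If only the recognized strings are needed, for example to feed them into a text pipeline, the ``DetectedText`` column of the same DataFrame can be pulled out directly; a small sketch:
###Code
words = image_text(bucket_name, f'images/{text_image_file}', sort_column='Top', parents=False)['DetectedText']
print(list(words))
###Output
_____no_output_____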
###Markdown
Tasks:
* Try other image files, such as 'olive_coffee_shop_*.png'.
Detect Celebs
Traffic analytics may also involve detecting VIP movement in order to divert traffic or monitor security events. Detecting a VIP in a scene starts with facial recognition. Our function ``detect_celebs`` works as well with political figures as it does with movie celebrities.
###Code
def detect_celebs(bucket, key, sort_column=''):
image_object = {'S3Object':{'Bucket': bucket,'Name': key}}
response = rekognition.recognize_celebrities(Image=image_object)
df = pd.DataFrame(response['CelebrityFaces'])
df['Width'] = df['Face'].apply(lambda x: x['BoundingBox']['Width'])
df['Height'] = df['Face'].apply(lambda x: x['BoundingBox']['Height'])
df['Left'] = df['Face'].apply(lambda x: x['BoundingBox']['Left'])
df['Top'] = df['Face'].apply(lambda x: x['BoundingBox']['Top'])
df = df.drop(columns=['Face'])
if sort_column:
df = df.sort_values([sort_column])
return(df)
show_image('./test_images/celeb-02-unsplash.jpg')
detect_celebs(bucket_name, 'images/celeb-02-unsplash.jpg', sort_column='Left')
###Output
_____no_output_____
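###Markdown
When only the identities matter, for example to match against a VIP watch list, the names and match confidence are enough. A short sketch, assuming the ``Name`` and ``MatchConfidence`` fields that ``recognize_celebrities`` normally returns:
###Code
celebs = detect_celebs(bucket_name, 'images/celeb-02-unsplash.jpg', sort_column='Left')
print(celebs[['Name', 'MatchConfidence']])
###Output
_____no_output_____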
###Markdown
Comprehend Syntax
It is possible that many data sources contain natural language and free text. Understanding structure and semantics from this unstructured text can help further our open data analytics use cases.
Let us assume we are processing traffic updates into structured data so we can take appropriate actions. The first step in understanding natural language is to break it up into its grammatical syntax. Nouns like "today" can tell us about a particular event, such as when it is occurring. Adjectives like "snowing" and "windy" tell us what is happening at that moment in time.
###Code
comprehend = boto3.client('comprehend', region)
traffic_update = """
It is snowing and windy today in New York. The temperature is 50 degrees Fahrenheit.
The traffic is slow 10 mph with several jams along the I-86.
"""
def comprehend_syntax(text):
response = comprehend.detect_syntax(Text=text, LanguageCode='en')
df = pd.read_json(io.StringIO(json.dumps(response['SyntaxTokens'])))
df['Tag'] = df['PartOfSpeech'].apply(lambda x: x['Tag'])
df['Score'] = df['PartOfSpeech'].apply(lambda x: x['Score'])
df = df.drop(columns=['PartOfSpeech'])
return df
comprehend_syntax(traffic_update)
###Output
_____no_output_____
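###Markdown
Since the helper already exposes the part-of-speech ``Tag`` column, pulling out just the nouns and adjectives discussed above is a one-line filter; a small sketch:
###Code
syntax_df = comprehend_syntax(traffic_update)
print(syntax_df[syntax_df['Tag'].isin(['NOUN', 'ADJ'])][['Text', 'Tag', 'Score']])
###Output
_____no_output_____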
###Markdown
Comprehend Entities
More insights can be derived by extracting entities from the natural language. These entities can be dates, locations, and quantities, among others. Just a few entities can tell a structured story to a program.
###Code
def comprehend_entities(text):
response = comprehend.detect_entities(Text=text, LanguageCode='en')
df = pd.read_json(io.StringIO(json.dumps(response['Entities'])))
return df
comprehend_entities(traffic_update)
###Output
_____no_output_____
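###Markdown
To act only on particular kinds of entities, such as the dates, locations, and quantities mentioned above, the ``Type`` field that ``detect_entities`` returns can be used as a filter; a small sketch:
###Code
entities_df = comprehend_entities(traffic_update)
print(entities_df[entities_df['Type'].isin(['DATE', 'LOCATION', 'QUANTITY'])][['Text', 'Type', 'Score']])
###Output
_____no_output_____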
###Markdown
Comprehend Phrases
Analysis of key phrases within natural language text complements the other two methods, helping a program better route actions based on the derived structure of the event.
###Code
def comprehend_phrases(text):
response = comprehend.detect_key_phrases(Text=text, LanguageCode='en')
df = pd.read_json(io.StringIO(json.dumps(response['KeyPhrases'])))
return df
comprehend_phrases(traffic_update)
###Output
_____no_output_____
###Markdown
Comprehend Sentiment
Sentiment analysis is common for social media user-generated content. Sentiment can give us signals about the user's mood when publishing such social data.
###Code
def comprehend_sentiment(text):
response = comprehend.detect_sentiment(Text=text, LanguageCode='en')
return response['SentimentScore']
comprehend_sentiment(traffic_update)
###Output
_____no_output_____
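###Markdown
For contrast, the same helper run on a made-up, more upbeat update (the text below is purely illustrative):
###Code
comprehend_sentiment("Traffic is flowing smoothly and the weather is sunny in New York today.")
###Output
_____no_output_____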
###Markdown
Type your own text below and check the related sentiment.
###Code
comprehend_sentiment("")  # replace the empty string with your own text before running
###Output
_____no_output_____ |