path (string, 7–265 chars) | concatenated_notebook (string, 46–17M chars) |
---|---|
.ipynb_checkpoints/Amazon_Reviews_ETL_starter_code-checkpoint.ipynb | ###Markdown
Load Amazon Data into Spark DataFrame
###Code
from pyspark import SparkFiles
# TODO: paste the Amazon reviews dataset URL here (intentionally left blank in this starter notebook)
url = ""
spark.sparkContext.addFile(url)
# The filename passed to SparkFiles.get should match the file downloaded from `url` (also left blank here)
df = spark.read.option("encoding", "UTF-8").csv(SparkFiles.get(""), sep="\t", header=True, inferSchema=True)
df.show()
###Output
_____no_output_____
###Markdown
Create DataFrames to match tables
###Code
from pyspark.sql.functions import to_date
# Read in the Review dataset as a DataFrame
# Create the customers_table DataFrame
# customers_df = df.groupby("").agg({""}).withColumnRenamed("", "customer_count")
# Create the products_table DataFrame and drop duplicates.
# products_df = df.select([]).drop_duplicates()
# Create the review_id_table DataFrame.
# Convert the 'review_date' column to a date datatype with to_date("review_date", 'yyyy-MM-dd').alias("review_date")
# review_id_df = df.select([, to_date("review_date", 'yyyy-MM-dd').alias("review_date")])
# Create the vine_table DataFrame.
# vine_df = df.select([])
###Output
_____no_output_____
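###Markdown
A minimal completed sketch of the starter cells above, assuming the standard Amazon reviews TSV schema (columns such as `customer_id`, `product_id`, `product_title`, `product_parent`, `review_id`, `star_rating`, `helpful_votes`, `total_votes`, `vine`, `review_date`); these column names are illustrative and should be checked against the dataset you actually load.
###Code
# Hypothetical completion of the starter cells (column names assumed, not confirmed by this notebook)
from pyspark.sql.functions import to_date
# customers_table: one row per customer with a count of their reviews
customers_df = df.groupby("customer_id").agg({"customer_id": "count"}).withColumnRenamed("count(customer_id)", "customer_count")
# products_table: unique product id/title pairs
products_df = df.select(["product_id", "product_title"]).drop_duplicates()
# review_id_table: review metadata with review_date converted to a date datatype
review_id_df = df.select(["review_id", "customer_id", "product_id", "product_parent", to_date("review_date", 'yyyy-MM-dd').alias("review_date")])
# vine_table: columns used for the Vine review analysis
vine_df = df.select(["review_id", "star_rating", "helpful_votes", "total_votes", "vine"])
###Output
_____no_output_____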
###Markdown
Connect to the AWS RDS instance and write each DataFrame to its table.
###Code
# Configure settings for RDS
mode = "append"
jdbc_url="jdbc:postgresql://<endpoint>:5432/<database name>"
config = {"user":"postgres",
"password": "<password>",
"driver":"org.postgresql.Driver"}
# Write review_id_df to table in RDS
review_id_df.write.jdbc(url=jdbc_url, table='review_id_table', mode=mode, properties=config)
# Write products_df to table in RDS
# about 3 min
products_df.write.jdbc(url=jdbc_url, table='products_table', mode=mode, properties=config)
# Write customers_df to table in RDS
# 5 min 14 s
customers_df.write.jdbc(url=jdbc_url, table='customers_table', mode=mode, properties=config)
# Write vine_df to table in RDS
# 11 minutes
vine_df.write.jdbc(url=jdbc_url, table='vine_table', mode=mode, properties=config)
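# Optional sanity check (hypothetical): read one table back to confirm the writes succeeded
# check_df = spark.read.jdbc(url=jdbc_url, table='review_id_table', properties=config)
# check_df.show(5)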
###Output
_____no_output_____ |
components/outlier-detection/cifar10/cifar10_outlier.ipynb | ###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
!cat ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
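# Optional check (assumes the Knative eventing CRDs are installed): confirm the broker is ready
!kubectl get broker default -n cifar10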
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
_____no_output_____
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
_____no_output_____
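###Markdown
Optionally wait for the SeldonDeployment to become available before proceeding (a minimal check; `sdep` is the Seldon Core short name for SeldonDeployment and the expected state is `Available`):
###Code
!kubectl get sdep tfserving-cifar10 -n cifar10 -o jsonpath='{.status.state}'
###Output
_____no_output_____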
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the event display (`hello-display`) we started. Here we configure `seldonio/alibi-detect-server` to use rclone for downloading the artifact. If the `RCLONE_ENABLED=true` environment variable is set, or any of the environment variables contain `RCLONE_CONFIG` in their name, then rclone will be used to download the artifacts. If `RCLONE_ENABLED=false` or no `RCLONE_CONFIG` variables are present, then the kfserving storage.py logic will be used to download the artifacts.
###Code
%%writefile cifar10od.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-rclone-secret
namespace: cifar10
type: Opaque
stringData:
RCLONE_CONFIG_GS_TYPE: google cloud storage
RCLONE_CONFIG_GS_ANONYMOUS: "true"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.8.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
envFrom:
- secretRef:
name: seldon-rclone-secret
!kubectl apply -f cifar10od.yaml
###Output
_____no_output_____
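###Markdown
Optionally confirm the Knative service for the detector is ready before sending traffic (a minimal check using the `ksvc` short name, as in the hostname lookup below):
###Code
!kubectl get ksvc vae-outlier -n cifar10
###Output
_____no_output_____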
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
_____no_output_____
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN = "Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
# CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import json
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
import requests
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = (
"plane",
"car",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
)
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
def predict(X):
formData = {"instances": X.tolist()}
headers = {"Authorization": TOKEN}
res = requests.post(
"http://"
+ CLUSTER_IP
+ "/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict",
json=formData,
headers=headers,
)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ", res.status_code)
return []
def outlier(X):
formData = {"instances": X.tolist()}
headers = {
"Alibi-Detect-Return-Feature-Score": "true",
"Alibi-Detect-Return-Instance-Score": "true",
"ce-namespace": "default",
"ce-modelid": "cifar10",
"ce-type": "io.seldon.serving.inference.request",
"ce-id": "1234",
"ce-source": "localhost",
"ce-specversion": "1.0",
}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post("http://" + CLUSTER_IP + "/", json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ", res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx : idx + 1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(
X.reshape(1, 32, 32, 3),
mask_size=(10, 10),
n_masks=1,
channels=[0, 1, 2],
mask_type="normal",
noise_distr=(0, 1),
clip_rng=(0, 1),
)
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show we have found an outlier.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds, X_mask, X_recon=None)
###Output
_____no_output_____
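###Markdown
As a quick sanity check (field names follow the response handling in the `outlier()` helper above), we can also print the binary outlier flag and instance score directly:
###Code
print("is_outlier:", od_preds["data"]["is_outlier"])
print("instance_score:", od_preds["data"]["instance_score"])
###Output
_____no_output_____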
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
_____no_output_____
###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
!cat ../../../notebooks/resources/seldon-gateway.yaml
###Output
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: seldon-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
deployment.apps/hello-display unchanged
service/hello-display unchanged
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
seldondeployment.machinelearning.seldon.io/tfserving-cifar10 unchanged
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the event display (`hello-display`) we started. Here we configure `seldonio/alibi-detect-server` to use rclone for downloading the artifact. If the `RCLONE_ENABLED=true` environment variable is set, or any of the environment variables contain `RCLONE_CONFIG` in their name, then rclone will be used to download the artifacts. If `RCLONE_ENABLED=false` or no `RCLONE_CONFIG` variables are present, then the kfserving storage.py logic will be used to download the artifacts.
###Code
%%writefile cifar10od.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-rclone-secret
namespace: cifar10
type: Opaque
stringData:
RCLONE_CONFIG_GS_TYPE: google cloud storage
RCLONE_CONFIG_GS_ANONYMOUS: "true"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.8.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
envFrom:
- secretRef:
name: seldon-rclone-secret
!kubectl apply -f cifar10od.yaml
###Output
secret/seldon-rclone-secret configured
service.serving.knative.dev/vae-outlier configured
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
trigger.eventing.knative.dev/vaeoutlier-trigger unchanged
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
172.18.255.1
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN="Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
#CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {"Authorization":TOKEN}
res = requests.post('http://'+CLUSTER_IP+'/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true", \
"ce-namespace": "default","ce-modelid":"cifar10","ce-type":"io.seldon.serving.inference.request", \
"ce-id":"1234","ce-source":"localhost","ce-specversion":"1.0"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
###Output
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
Outlier False
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show we have found an outlier.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
Outlier True
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
namespace "cifar10" deleted
###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
###Output
gateway.networking.istio.io/seldon-gateway unchanged
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
deployment.apps/hello-display created
service/hello-display created
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
seldondeployment.machinelearning.seldon.io/tfserving-cifar10 created
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the message-dumper we started.
###Code
%%writefile cifar10od.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.5.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
!kubectl apply -f cifar10od.yaml
###Output
service.serving.knative.dev/vae-outlier created
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
trigger.eventing.knative.dev/vaeoutlier-trigger created
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
35.240.18.201
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
#CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {}
res = requests.post('http://'+CLUSTER_IP+'/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true", \
"ce-namespace": "default","ce-modelid":"cifar10","ce-type":"io.seldon.serving.inference.request", \
"ce-id":"1234","ce-source":"localhost","ce-specversion":"1.0"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
###Output
WARNING: Logging before flag parsing goes to stderr.
E1110 11:58:05.548886 140347082585856 plot.py:39] Importing plotly failed. Interactive plots will not work.
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
Outlier False
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show we have found an outlier.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
Outlier True
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
namespace "cifar10" deleted
###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
!cat ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
_____no_output_____
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
_____no_output_____
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the event display (`hello-display`) we started. Here we configure `seldonio/alibi-detect-server` to use rclone for downloading the artifact. If the `RCLONE_ENABLED=true` environment variable is set, or any of the environment variables contain `RCLONE_CONFIG` in their name, then rclone will be used to download the artifacts. If `RCLONE_ENABLED=false` or no `RCLONE_CONFIG` variables are present, then the kfserving storage.py logic will be used to download the artifacts.
###Code
%%writefile cifar10od.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-rclone-secret
namespace: cifar10
type: Opaque
stringData:
RCLONE_CONFIG_GS_TYPE: google cloud storage
RCLONE_CONFIG_GS_ANONYMOUS: "true"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.8.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
envFrom:
- secretRef:
name: seldon-rclone-secret
!kubectl apply -f cifar10od.yaml
###Output
_____no_output_____
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS = !(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP = CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
_____no_output_____
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN = "Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
# CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES = !(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD = SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import json
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
import requests
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = (
"plane",
"car",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
)
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
def predict(X):
formData = {"instances": X.tolist()}
headers = {"Authorization": TOKEN}
res = requests.post(
"http://"
+ CLUSTER_IP
+ "/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict",
json=formData,
headers=headers,
)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ", res.status_code)
return []
def outlier(X):
formData = {"instances": X.tolist()}
headers = {
"Alibi-Detect-Return-Feature-Score": "true",
"Alibi-Detect-Return-Instance-Score": "true",
"ce-namespace": "default",
"ce-modelid": "cifar10",
"ce-type": "io.seldon.serving.inference.request",
"ce-id": "1234",
"ce-source": "localhost",
"ce-specversion": "1.0",
}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post("http://" + CLUSTER_IP + "/", json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ", res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx : idx + 1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res = !kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data = []
for i in range(0, len(res)):
if res[i] == "Data,":
data.append(res[i + 1])
j = json.loads(json.loads(data[0]))
print("Outlier", j["data"]["is_outlier"] == [1])
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(
X.reshape(1, 32, 32, 3),
mask_size=(10, 10),
n_masks=1,
channels=[0, 1, 2],
mask_type="normal",
noise_distr=(0, 1),
clip_rng=(0, 1),
)
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show we have found an outlier.
###Code
res = !kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data = []
for i in range(0, len(res)):
if res[i] == "Data,":
data.append(res[i + 1])
j = json.loads(json.loads(data[-1]))
print("Outlier", j["data"]["is_outlier"] == [1])
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds, X_mask, X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
_____no_output_____
###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE with Knative 0.13 and Istio 1.3.1
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources Enable eventing on the default namespace. This will activate a default Knative Broker.
###Code
!kubectl label namespace default knative-eventing-injection=enabled
###Output
_____no_output_____
###Markdown
Create a Knative service to log events it receives. This will be the example final sink for outlier events.
###Code
!pygmentize message-dumper.yaml
!kubectl apply -f message-dumper.yaml
###Output
_____no_output_____
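###Markdown
For reference, a minimal sketch of what `message-dumper.yaml` typically contains (based on the standard Knative event-display image used in later revisions of this notebook; the actual file may differ):
###Code
%%writefile message-dumper-example.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: message-dumper
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
###Output
_____no_output_____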
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
!pygmentize cifar10.yaml
!kubectl apply -f cifar10.yaml
###Output
_____no_output_____
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the message-dumper we started.
###Code
!pygmentize cifar10od.yaml
!kubectl apply -f cifar10od.yaml
###Output
_____no_output_____
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
!pygmentize trigger.yaml
!kubectl apply -f trigger.yaml
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
SERVICE_HOSTNAMES=!(kubectl get ksvc vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {}
res = requests.post('http://'+CLUSTER_IP+'/seldon/default/tfserving-cifar10/v1/models/resnet32/:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show we have found an outlier.
###Code
res=!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete -f cifar10.yaml
!kubectl delete -f cifar10od.yaml
!kubectl delete -f trigger.yaml
!kubectl delete -f message-dumper.yaml
###Output
_____no_output_____
###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
!cat ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
_____no_output_____
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
_____no_output_____
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the event display (`hello-display`) we started. Here we configure `seldonio/alibi-detect-server` to use rclone for downloading the artifact. If the `RCLONE_ENABLED=true` environment variable is set, or any of the environment variables contain `RCLONE_CONFIG` in their name, then rclone will be used to download the artifacts. If `RCLONE_ENABLED=false` or no `RCLONE_CONFIG` variables are present, then the kfserving storage.py logic will be used to download the artifacts.
###Code
%%writefile cifar10od.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-rclone-secret
namespace: cifar10
type: Opaque
stringData:
RCLONE_CONFIG_GS_TYPE: google cloud storage
RCLONE_CONFIG_GS_ANONYMOUS: "true"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.8.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
envFrom:
- secretRef:
name: seldon-rclone-secret
!kubectl apply -f cifar10od.yaml
###Output
_____no_output_____
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
_____no_output_____
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN = "Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
# CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import json
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
import requests
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = (
"plane",
"car",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
)
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
def predict(X):
formData = {"instances": X.tolist()}
headers = {"Authorization": TOKEN}
res = requests.post(
"http://"
+ CLUSTER_IP
+ "/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict",
json=formData,
headers=headers,
)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ", res.status_code)
return []
def outlier(X):
formData = {"instances": X.tolist()}
headers = {
"Alibi-Detect-Return-Feature-Score": "true",
"Alibi-Detect-Return-Instance-Score": "true",
"ce-namespace": "default",
"ce-modelid": "cifar10",
"ce-type": "io.seldon.serving.inference.request",
"ce-id": "1234",
"ce-source": "localhost",
"ce-specversion": "1.0",
}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post("http://" + CLUSTER_IP + "/", json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ", res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx : idx + 1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(
X.reshape(1, 32, 32, 3),
mask_size=(10, 10),
n_masks=1,
channels=[0, 1, 2],
mask_type="normal",
noise_distr=(0, 1),
clip_rng=(0, 1),
)
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show we have found an outlier.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds, X_mask, X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
_____no_output_____
###Markdown
Cifar10 Outlier Detection In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well. Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE with Knative 0.13 and Istio 1.3.1
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources Enable eventing on the default namespace. This will activate a default Knative Broker.
###Code
!kubectl label namespace default knative-eventing-injection=enabled
###Output
_____no_output_____
###Markdown
Create a Knative service to log events it receives. This will be the example final sink for outlier events.
###Code
!pygmentize message-dumper.yaml
!kubectl apply -f message-dumper.yaml
###Output
_____no_output_____
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
!pygmentize cifar10.yaml
!kubectl apply -f cifar10.yaml
###Output
_____no_output_____
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the message-dumper we started.
###Code
!pygmentize cifar10od.yaml
!kubectl apply -f cifar10od.yaml
###Output
_____no_output_____
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
!pygmentize trigger.yaml
!kubectl apply -f trigger.yaml
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
SERVICE_HOSTNAMES=!(kubectl get ksvc vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {}
res = requests.post('http://'+CLUSTER_IP+'/seldon/default/tfserving-cifar10/v1/models/resnet32/:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show that we have found an outlier.
###Code
res=!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete -f cifar10.yaml
!kubectl delete -f cifar10od.yaml
!kubectl delete -f trigger.yaml
!kubectl delete -f message-dumper.yaml
###Output
_____no_output_____
###Markdown
Cifar10 Outlier DetectionIn this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well.Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
!cat ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
_____no_output_____
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
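# A hedged status check for the SeldonDeployment created above ('sdep' is Seldon's short resource name):
# !kubectl get sdep -n cifar10 tfserving-cifar10 -o jsonpath='{.status.state}'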
###Output
_____no_output_____
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the `hello-display` event display we started. Here we configure `seldonio/alibi-detect-server` to use rclone for downloading the artifact. If the `RCLONE_ENABLED=true` environment variable is set, or any of the environment variables contain `RCLONE_CONFIG` in their name, then rclone will be used to download the artifacts. If `RCLONE_ENABLED=false` or no `RCLONE_CONFIG` variables are present, then the kfserving `storage.py` logic will be used to download the artifacts.
###Code
%%writefile cifar10od.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-rclone-secret
namespace: cifar10
type: Opaque
stringData:
RCLONE_CONFIG_GS_TYPE: google cloud storage
RCLONE_CONFIG_GS_ANONYMOUS: "true"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.8.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
envFrom:
- secretRef:
name: seldon-rclone-secret
!kubectl apply -f cifar10od.yaml
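# A hedged readiness check for the Knative service created above:
# !kubectl get ksvc -n cifar10 vae-outlier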
###Output
_____no_output_____
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
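# Optionally confirm the trigger resolved (a hedged check; names match the YAML above):
# !kubectl get trigger -n cifar10 vaeoutlier-trigger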
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS = !(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP = CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
_____no_output_____
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN = "Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
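A typical port-forward looks like the sketch below; the local port (8004) is an assumption and must match the `CLUSTER_IP` override in the next cell:
```
kubectl port-forward -n istio-system svc/istio-ingressgateway 8004:80
```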
###Code
# CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES = !(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD = SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import json
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
import requests
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = (
"plane",
"car",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
)
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
def predict(X):
formData = {"instances": X.tolist()}
headers = {"Authorization": TOKEN}
res = requests.post(
"http://"
+ CLUSTER_IP
+ "/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict",
json=formData,
headers=headers,
)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ", res.status_code)
return []
def outlier(X):
formData = {"instances": X.tolist()}
headers = {
"Alibi-Detect-Return-Feature-Score": "true",
"Alibi-Detect-Return-Instance-Score": "true",
"ce-namespace": "default",
"ce-modelid": "cifar10",
"ce-type": "io.seldon.serving.inference.request",
"ce-id": "1234",
"ce-source": "localhost",
"ce-specversion": "1.0",
}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post("http://" + CLUSTER_IP + "/", json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ", res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx : idx + 1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(
X.reshape(1, 32, 32, 3),
mask_size=(10, 10),
n_masks=1,
channels=[0, 1, 2],
mask_type="normal",
noise_distr=(0, 1),
clip_rng=(0, 1),
)
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show that we have found an outlier.
###Code
!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds, X_mask, X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
_____no_output_____
###Markdown
Cifar10 Outlier DetectionIn this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well.Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
###Output
gateway.networking.istio.io/seldon-gateway unchanged
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
deployment.apps/hello-display created
service/hello-display created
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
seldondeployment.machinelearning.seldon.io/tfserving-cifar10 created
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the `hello-display` event display we started.
###Code
%%writefile cifar10od.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.5.0
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
!kubectl apply -f cifar10od.yaml
###Output
service.serving.knative.dev/vae-outlier created
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
trigger.eventing.knative.dev/vaeoutlier-trigger created
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
34.77.158.93
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN="Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
#CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {"Authorization":TOKEN}
res = requests.post('http://'+CLUSTER_IP+'/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true", \
"ce-namespace": "default","ce-modelid":"cifar10","ce-type":"io.seldon.serving.inference.request", \
"ce-id":"1234","ce-source":"localhost","ce-specversion":"1.0"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
###Output
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
Outlier False
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show that we have found an outlier.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
Outlier True
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
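# A hedged peek at the response fields this notebook relies on
# (alibi-detect nests them under "data", as the checks above assume):
# print("is_outlier:", od_preds["data"]["is_outlier"])
# print("instance_score:", od_preds["data"]["instance_score"])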
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
namespace "cifar10" deleted
###Markdown
Cifar10 Outlier DetectionIn this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in-depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well.Prerequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` Tested on GKE and Kind with Knative 0.18 and Istio 1.7.3
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure istio gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
!cat ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10
!kubectl create -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
_____no_output_____
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: tfserving-cifar10
namespace: cifar10
spec:
protocol: tensorflow
transport: rest
predictors:
- componentSpecs:
- spec:
containers:
- args:
- --port=8500
- --rest_api_port=8501
- --model_name=resnet32
- --model_base_path=gs://seldon-models/tfserving/cifar10/resnet32
image: tensorflow/serving
name: resnet32
ports:
- containerPort: 8501
name: http
protocol: TCP
graph:
name: resnet32
type: MODEL
endpoint:
service_port: 8501
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10/default
name: model
replicas: 1
!kubectl apply -f cifar10.yaml
###Output
_____no_output_____
###Markdown
Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the `hello-display` event display we started. Here we configure `seldonio/alibi-detect-server` to use rclone for downloading the artifact. If the `RCLONE_ENABLED=true` environment variable is set, or any of the environment variables contain `RCLONE_CONFIG` in their name, then rclone will be used to download the artifacts. If `RCLONE_ENABLED=false` or no `RCLONE_CONFIG` variables are present, then the kfserving `storage.py` logic will be used to download the artifacts.
###Code
%%writefile cifar10od.yaml
apiVersion: v1
kind: Secret
metadata:
name: seldon-rclone-secret
namespace: cifar10
type: Opaque
stringData:
RCLONE_CONFIG_GS_TYPE: google cloud storage
RCLONE_CONFIG_GS_ANONYMOUS: "true"
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: vae-outlier
namespace: cifar10
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.8.0-dev
imagePullPolicy: IfNotPresent
args:
- --model_name
- cifar10od
- --http_port
- '8080'
- --protocol
- tensorflow.http
- --storage_uri
- gs://seldon-models/alibi-detect/od/OutlierVAE/cifar10
- --reply_url
- http://hello-display.cifar10
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.cifar10od
- OutlierDetector
envFrom:
- secretRef:
name: seldon-rclone-secret
!kubectl apply -f cifar10od.yaml
###Output
_____no_output_____
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: vaeoutlier-trigger
namespace: cifar10
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: vae-outlier
namespace: cifar10
!kubectl apply -f trigger.yaml
###Output
_____no_output_____
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
_____no_output_____
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will depend on your auth setup.
###Code
TOKEN="Bearer <my token>"
###Output
_____no_output_____
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
#CLUSTER_IP="localhost:8004"
SERVICE_HOSTNAMES=!(kubectl get ksvc -n cifar10 vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
def predict(X):
formData = {
'instances': X.tolist()
}
headers = {"Authorization":TOKEN}
res = requests.post('http://'+CLUSTER_IP+'/seldon/cifar10/tfserving-cifar10/v1/models/resnet32/:predict', json=formData, headers=headers)
if res.status_code == 200:
return classes[np.array(res.json()["predictions"])[0].argmax()]
else:
print("Failed with ",res.status_code)
return []
def outlier(X):
formData = {
'instances': X.tolist()
}
headers = {"Alibi-Detect-Return-Feature-Score":"true","Alibi-Detect-Return-Instance-Score":"true", \
"ce-namespace": "default","ce-modelid":"cifar10","ce-type":"io.seldon.serving.inference.request", \
"ce-id":"1234","ce-source":"localhost","ce-specversion":"1.0"}
headers["Host"] = SERVICE_HOSTNAME_VAEOD
headers["Authorization"] = TOKEN
res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
return od
else:
print("Failed with ",res.status_code)
return []
###Output
_____no_output_____
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper for an outlier detection prediction. This should be false.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
Outlier Prediction
###Code
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
###Output
_____no_output_____
###Markdown
Now let's check the message dumper for a new message. This should show that we have found an outlier.
###Code
res=!kubectl logs -n cifar10 $(kubectl get pod -n cifar10 -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
data.append(res[i+1])
j = json.loads(json.loads(data[-1]))
print("Outlier",j["data"]["is_outlier"]==[1])
###Output
_____no_output_____
###Markdown
We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
###Code
od_preds = outlier(X_mask)
###Output
_____no_output_____
###Markdown
We now plot those feature scores returned by the outlier detector along with our original image.
###Code
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
###Output
_____no_output_____
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10
###Output
_____no_output_____ |
examples/count/sweep-gregor-i5-12.ipynb | ###Markdown
When benchmarking you **MUST**:
1. close all applications
2. close Docker
3. close all but this web browser window
4. close all open editors other than JupyterLab (this notebook)
###Code
import os
from cloudmesh.common.Shell import Shell
from pprint import pprint
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import pandas as pd
from tqdm.notebook import tqdm
from cloudmesh.common.util import readfile
from cloudmesh.common.util import writefile
from cloudmesh.common.StopWatch import StopWatch
from cloudmesh.common.systeminfo import systeminfo
import ipywidgets as widgets
sns.set_theme(style="whitegrid")
info = systeminfo()
user = info["user"]
node = info["uname.node"]
processors = 4
# Parameters
user = "gregor"
node = "i5"
processors = 12
p = widgets.IntSlider(
value=processors,
min=2,
max=64,
step=1,
description='Processors:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
u = widgets.Text(value=user, placeholder='The user name', description='User:', disabled=False)
n = widgets.Text(value=node, placeholder='The computer name', description='Computer:', disabled=False)
display(p)
display(u)
display(n)
processors = p.value
user = u.value
node = n.value
print (processors, user, node)
experiments = 10
maximum = 1024 * 100000
intervals = 10
label = f"{user}-{node}-{processors}"
output = f"benchmark/{user}"
delta = int(maximum / intervals)
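# totals: overall problem sizes for the sweep; points: each total split evenly across the MPI processes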
totals = [64] + list(range(0,maximum, delta))[1:]
points = [int(t/processors) for t in totals]
print (totals)
print(points)
os.makedirs(output, exist_ok=True)
# use a distinct name so we don't shadow the imported systeminfo() called earlier
sysinfo = StopWatch.systeminfo({"user": user, "uname.node": node})
writefile(f"{output}/{label}-sysinfo.log", sysinfo)
print(sysinfo)
df = pd.DataFrame(
{"Size": points}
)
df = df.set_index('Size')
experiment_progress = tqdm(range(0, experiments), desc ="Experiment")
experiment = -1
for experiment in experiment_progress:
log = f"{output}/{label}-{experiment}-log.log"
os.system(f"rm {log}")
name = points[experiment]
progress = tqdm(range(0, len(points)),
desc =f"Benchmark {name}",
bar_format="{desc:<30} {total_fmt} {r_bar}")
i = -1
for state in progress:
i = i + 1
n = points[i]
    # On Linux/macOS:
command = f"mpiexec -n {processors} python count-click.py " + \
f"--n {n} --max_number 10 --find 8 --label {label} " + \
f"--user {user} --node={node} " + \
f"| tee -a {log}"
    # On Windows (tee is unavailable; append with >> instead):
#command = f"mpiexec -n {processors} python count-click.py " + \
# f"--n {n} --max_number 10 --find 8 --label {label} " + \
# f"--user {user} --node={node} " + \
# f">> {log}"
os.system (command)
content = readfile(log).splitlines()
lines = Shell.cm_grep(content, "csv,Result:")
# print(lines)
values = []
times = []
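# assumed log format: comma-separated, with field 4 the elapsed time and field 7 packing "total overall trials find label"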
for line in lines:
msg = line.split(",")[7]
t = line.split(",")[4]
total, overall, trials, find, label = msg.split(" ")
values.append(int(overall))
times.append(float(t))
# print (t, overall)
#data = pd.DataFrame(values, times, columns=["Values", "Time"])
#print (data.describe())
#sns.lineplot(data=data, palette="tab10", linewidth=2.5)
# df["Size"] = values
df[f"Time_{experiment}"] = times
# print(df)
df = df.rename_axis(columns="Time")
df
sns.lineplot(data=df, markers=True);
plt.savefig(f'{output}/{label}-line.png');
plt.savefig(f'{output}/{label}-line.pdf');
dfs = df.stack().reset_index()
dfs = dfs.set_index('Size')
dfs = dfs.drop(columns=['Time'])
dfs = dfs.rename(columns={0:'Time'})
dfs
sns.scatterplot(data=dfs, x="Size", y="Time");
plt.savefig(f"{output}/{label}-scatter.pdf")
plt.savefig(f"{output}/{label}-scatter.png")
sns.relplot(x="Size", y="Time", kind="line", data=dfs);
plt.savefig(f"{output}/{label}-relplot.pdf")
plt.savefig(f"{output}/{label}-relplot.png")
df.to_pickle(f"{output}/{label}-df.pkl")
###Output
_____no_output_____ |
docs/tutorial/03-model-serving.ipynb | ###Markdown
Part 3: Serving an ML ModelThis part of the MLRun getting-started tutorial walks you through the steps for creating, deploying, and testing a model-serving function ("a serving function" a.k.a. "a model server") using MLRun Serving and Nuclio runtimes.MLRun serving can take various tasks, including MLRun models or standard model files, and produce managed real-time serverless pipelines based on the Nuclio real-time serverless engine, which can be deployed anywhere.[Nuclio](https://nuclio.io/) is a high-performance open-source "serverless" framework that's focused on data, I/O, and compute-intensive workloads.Simple model serving classes can be written in Python or be taken from a set of pre-developed ML/DL classes.The code can handle complex data, feature preparation, and binary data (such as images and video files).The Nuclio serving engine supports the full model-serving life cycle, including auto generation of microservices, APIs, load balancing, model logging and monitoring, and configuration management.MLRun Serving supports more advanced real-time data processing and model serving pipelines; for more details and examples, see the [MLRun Serving Graphs](../serving/serving-graph.md) documentation.The tutorial consists of the following steps:1. [Setup and Configuration](gs-tutorial-3-step-setup) — load your project2. [Writing A Simple Serving Class](gs-tutorial-3-step-writing-a-serving-class)3. [Deploying the Model-Serving Function (Service)](gs-tutorial-3-step-deploy-the-serving-function)4. [Using the Live Model-Serving Function](gs-tutorial-3-step-using-the-live-serving-function)5. [Viewing the Nuclio Serving Function on the Dashboard](gs-tutorial-3-step-view-serving-func-in-ui)By the end of this tutorial you'll learn how to- Create model-serving functions.- Deploy models at scale.- Test your deployed models. PrerequisitesThe following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.Therefore, make sure to first run parts 1—[2](02-model-training.ipynb) of the tutorial. Step 1: Setup and Configuration Importing LibrariesRun the following code to import required libraries:
###Code
from os import path
import mlrun
###Output
_____no_output_____
###Markdown
Initializing Your MLRun EnvironmentUse the `set_environment` MLRun method to configure the working environment and default configuration.Set the `project` and `user_project` parameters to the same values that you used in the call to this method in the [Part 1: MLRun Basics](./01-mlrun-basics.ipynbgs-tutorial-1-set-mlrun-envr) tutorial notebook.
###Code
# Set the base project name
project_name_base = 'getting-started-tutorial'
# Initialize the MLRun environment and save the project name and artifacts path
project_name, artifact_path = mlrun.set_environment(project=project_name_base,
user_project=True)
###Output
_____no_output_____
###Markdown
Step 2: Writing A Simple Serving ClassThe serving class is initialized automatically by the model server.All you need is to implement two mandatory methods:- `load` — downloads the model files and loads the model into memory. This can be done either synchronously or asynchronously.- `predict` — accepts a request payload and returns prediction (inference) results.For more detailed information on serving classes, see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md).The following code demonstrates a minimal scikit-learn (a.k.a. sklearn) serving-class implementation:
###Code
# nuclio: start-code
from cloudpickle import load
import numpy as np
from typing import List
import mlrun
class ClassifierModel(mlrun.serving.V2ModelServer):
def load(self):
"""load and initialize the model and/or other elements"""
model_file, extra_data = self.get_model('.pkl')
self.model = load(open(model_file, 'rb'))
def predict(self, body: dict) -> List:
"""Generate model predictions from sample."""
feats = np.asarray(body['inputs'])
result: np.ndarray = self.model.predict(feats)
return result.tolist()
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Step 3: Deploying the Model-Serving Function (Service)To provision (deploy) a function for serving the model ("a serving function") you need to create an MLRun function of type `serving`.You can do this by using the `code_to_function` MLRun method from a web notebook, or by importing an existing serving function or template from the MLRun functions marketplace. Converting a Serving Class to a Serving FunctionThe following code converts the `ClassifierModel` class that you defined in the previous step to a serving function.The name of the class to be used by the serving function is set in `spec.default_class`.
###Code
from mlrun import code_to_function
serving_fn = code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'
###Output
_____no_output_____
###Markdown
Add the model created in previous notebook by the training function
###Code
model_file = f'store://{project_name}/train-iris-train_iris_model'
serving_fn.add_model('my_model',model_path=model_file)
from mlrun.platforms import auto_mount
serving_fn = serving_fn.apply(auto_mount())
###Output
_____no_output_____
###Markdown
Test Our Function LocallyCreate a test server (mock server) and test it with sample data
###Code
my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''
server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)
###Output
_____no_output_____
###Markdown
Building and Deploying the Serving FunctionUse the `deploy` method of the MLRun serving function to build and deploy a Nuclio serving function from your serving-function code.
###Code
function_address = serving_fn.deploy()
###Output
> 2021-01-25 08:40:23,461 [info] Starting remote function deploy
2021-01-25 08:40:23 (info) Deploying function
2021-01-25 08:40:23 (info) Building
2021-01-25 08:40:23 (info) Staging files and preparing base images
2021-01-25 08:40:23 (info) Building processor image
2021-01-25 08:40:24 (info) Build complete
2021-01-25 08:40:30 (info) Function deploy complete
> 2021-01-25 08:40:31,117 [info] function deployed, address=default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
###Markdown
Step 4: Using the Live Model-Serving FunctionAfter the function is deployed successfully, the serving function has a new HTTP endpoint for handling serving requests.The example tutorial serving function receives HTTP prediction (inference) requests on this endpoint;calls the `infer` method to get the requested predictions; and returns the results on the same endpoint.
###Code
print (f'The address for the function is {function_address} \n')
!curl $function_address
###Output
The address for the function is default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
{"name": "ModelRouter", "version": "v2", "extensions": []}
###Markdown
Testing the Model ServerTest your model server by sending data for inference.The `invoke` serving-function method enables programmatic testing of the serving function.For model inference (predictions), specify the model name followed by `infer`:```/v2/models/{model_name}/infer```For complete model-service API commands — such as for list models (`models`), get model health (`ready`), and model explanation (`explain`) — see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.mdmodel-server-api).
###Code
serving_fn.invoke('/v2/models/my_model/infer', my_data)
###Output
_____no_output_____
###Markdown
Part 3: Serving an ML ModelThis part of the MLRun getting-started tutorial walks you through the steps for implementing ML model serving using MLRun serving and Nuclio runtimes.The tutorial walks you through the steps for creating, deploying, and testing a model-serving function ("a serving function" a.k.a. "a model server").MLRun serving can produce managed real-time serverless pipelines from various tasks, including MLRun models or standard model files.The pipelines use the Nuclio real-time serverless engine, which can be deployed anywhere.[Nuclio](https://nuclio.io/) is a high-performance open-source "serverless" framework that's focused on data, I/O, and compute-intensive workloads.Simple model serving classes can be written in Python or be taken from a set of pre-developed ML/DL classes.The code can handle complex data, feature preparation, and binary data (such as images and video files).The Nuclio serving engine supports the full model-serving life cycle;this includes auto generation of microservices, APIs, load balancing, model logging and monitoring, and configuration management.MLRun serving supports more advanced real-time data processing and model serving pipelines.For more details and examples, see the [MLRun Serving Graphs](../serving/serving-graph.md) documentation.The tutorial consists of the following steps:1. [Setup and Configuration](gs-tutorial-3-step-setup) — load your project2. [Writing A Simple Serving Class](gs-tutorial-3-step-writing-a-serving-class)3. [Deploying the Model-Serving Function (Service)](gs-tutorial-3-step-deploy-the-serving-function)4. [Using the Live Model-Serving Function](gs-tutorial-3-step-using-the-live-serving-function)5. [Viewing the Nuclio Serving Function on the Dashboard](gs-tutorial-3-step-view-serving-func-in-ui)By the end of this tutorial you'll learn how to- Create model-serving functions.- Deploy models at scale.- Test your deployed models. PrerequisitesThe following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.Therefore, make sure to first run parts 1—[2](02-model-training.ipynb) of the tutorial. Step 1: Setup and Configuration Importing LibrariesRun the following code to import required libraries:
###Code
from os import path
import mlrun
###Output
_____no_output_____
###Markdown
Initializing Your MLRun EnvironmentUse the `set_environment` MLRun method to configure the working environment and default configuration.Set the `project` and `user_project` parameters to the same values that you used in the call to this method in the [Part 1: MLRun Basics](./01-mlrun-basics.ipynbgs-tutorial-1-set-mlrun-envr) tutorial notebook.
###Code
# Set the base project name
project_name_base = 'getting-started-tutorial'
# Initialize the MLRun environment and save the project name and artifacts path
project_name, artifact_path = mlrun.set_environment(project=project_name_base,
user_project=True)
###Output
_____no_output_____
###Markdown
Step 2: Writing A Simple Serving ClassThe serving class is initialized automatically by the model server.All you need is to implement two mandatory methods:- `load` — downloads the model files and loads the model into memory. This can be done either synchronously or asynchronously.- `predict` — accepts a request payload and returns prediction (inference) results.For more detailed information on serving classes, see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md).The following code demonstrates a minimal scikit-learn (a.k.a. sklearn) serving-class implementation:
###Code
# nuclio: start-code
from cloudpickle import load
import numpy as np
from typing import List
import mlrun
class ClassifierModel(mlrun.serving.V2ModelServer):
def load(self):
"""load and initialize the model and/or other elements"""
model_file, extra_data = self.get_model('.pkl')
self.model = load(open(model_file, 'rb'))
def predict(self, body: dict) -> List:
"""Generate model predictions from sample."""
feats = np.asarray(body['inputs'])
result: np.ndarray = self.model.predict(feats)
return result.tolist()
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Step 3: Deploying the Model-Serving Function (Service)To provision (deploy) a function for serving the model ("a serving function") you need to create an MLRun function of type `serving`.You can do this by using the `code_to_function` MLRun method from a web notebook, or by importing an existing serving function or template from the MLRun functions marketplace. Converting a Serving Class to a Serving FunctionThe following code converts the `ClassifierModel` class that you defined in the previous step to a serving function.The name of the class to be used by the serving function is set in `spec.default_class`.
###Code
from mlrun import code_to_function
serving_fn = code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'
###Output
_____no_output_____
###Markdown
Add the model created in previous notebook by the training function
###Code
model_file = f'store://{project_name}/train-iris-train_iris_model'
serving_fn.add_model('my_model',model_path=model_file)
from mlrun.platforms import auto_mount
serving_fn = serving_fn.apply(auto_mount())
###Output
_____no_output_____
###Markdown
Testing Your Function LocallyTo test your function locally, create a test server (mock server) and test it with sample data.
###Code
my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''
server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)
###Output
_____no_output_____
###Markdown
Building and Deploying the Serving FunctionUse the `deploy` method of the MLRun serving function to build and deploy a Nuclio serving function from your serving-function code.
###Code
function_address = serving_fn.deploy()
###Output
> 2021-01-25 08:40:23,461 [info] Starting remote function deploy
2021-01-25 08:40:23 (info) Deploying function
2021-01-25 08:40:23 (info) Building
2021-01-25 08:40:23 (info) Staging files and preparing base images
2021-01-25 08:40:23 (info) Building processor image
2021-01-25 08:40:24 (info) Build complete
2021-01-25 08:40:30 (info) Function deploy complete
> 2021-01-25 08:40:31,117 [info] function deployed, address=default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
###Markdown
Step 4: Using the Live Model-Serving FunctionAfter the function is deployed successfully, the serving function has a new HTTP endpoint for handling serving requests.The example tutorial serving function receives HTTP prediction (inference) requests on this endpoint;calls the `infer` method to get the requested predictions; and returns the results on the same endpoint.
###Code
print (f'The address for the function is {function_address} \n')
!curl $function_address
###Output
The address for the function is default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
{"name": "ModelRouter", "version": "v2", "extensions": []}
###Markdown
Testing the Model ServerTest your model server by sending data for inference.The `invoke` serving-function method enables programmatic testing of the serving function.For model inference (predictions), specify the model name followed by `infer`:```/v2/models/{model_name}/infer```For complete model-service API commands — such as for list models (`models`), get model health (`ready`), and model explanation (`explain`) — see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.mdmodel-server-api).
###Code
serving_fn.invoke('/v2/models/my_model/infer', my_data)
###Output
_____no_output_____
###Markdown
Part 3: Serving an ML ModelThis part of the MLRun getting-started tutorial walks you through the steps for implementing ML model serving using MLRun serving and Nuclio runtimes.The tutorial walks you through the steps for creating, deploying, and testing a model-serving function ("a serving function" a.k.a. "a model server").MLRun serving can produce managed real-time serverless pipelines from various tasks, including MLRun models or standard model files.The pipelines use the Nuclio real-time serverless engine, which can be deployed anywhere.[Nuclio](https://nuclio.io/) is a high-performance open-source "serverless" framework that's focused on data, I/O, and compute-intensive workloads.Simple model serving classes can be written in Python or be taken from a set of pre-developed ML/DL classes.The code can handle complex data, feature preparation, and binary data (such as images and video files).The Nuclio serving engine supports the full model-serving life cycle;this includes auto generation of microservices, APIs, load balancing, model logging and monitoring, and configuration management.MLRun serving supports more advanced real-time data processing and model serving pipelines.For more details and examples, see the [MLRun Serving Graphs](../serving/serving-graph.md) documentation.The tutorial consists of the following steps:1. [Setup and Configuration](gs-tutorial-3-step-setup) — load your project2. [Writing A Simple Serving Class](gs-tutorial-3-step-writing-a-serving-class)3. [Deploying the Model-Serving Function (Service)](gs-tutorial-3-step-deploy-the-serving-function)4. [Using the Live Model-Serving Function](gs-tutorial-3-step-using-the-live-serving-function)5. [Viewing the Nuclio Serving Function on the Dashboard](gs-tutorial-3-step-view-serving-func-in-ui)By the end of this tutorial you'll learn how to- Create model-serving functions.- Deploy models at scale.- Test your deployed models. PrerequisitesThe following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.Therefore, make sure to first run parts 1—[2](02-model-training.ipynb) of the tutorial. Step 1: Setup and Configuration Importing LibrariesRun the following code to import required libraries:
###Code
import mlrun
###Output
_____no_output_____
###Markdown
Initializing Your MLRun EnvironmentUse the `get_or_create_project` MLRun method to create a new project or fetch it from the DB/repository if it already exists.Set the `project` and `user_project` parameters to the same values that you used in the call to this method in the [Part 1: MLRun Basics](./01-mlrun-basics.ipynbgs-tutorial-1-mlrun-envr-init) tutorial notebook.
###Code
# Set the base project name
project_name_base = 'getting-started'
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name_base, context="./", user_project=True)
###Output
> 2022-02-08 19:57:17,874 [info] loaded project getting-started from MLRun DB
###Markdown
Step 2: Writing A Simple Serving ClassThe serving class is initialized automatically by the model server.All you need is to implement two mandatory methods:- `load` — downloads the model files and loads the model into memory. This can be done either synchronously or asynchronously.- `predict` — accepts a request payload and returns prediction (inference) results.For more detailed information on serving classes, see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md).The following code demonstrates a minimal scikit-learn (a.k.a. sklearn) serving-class implementation:
###Code
# mlrun: start-code
from cloudpickle import load
import numpy as np
from typing import List
import mlrun
class ClassifierModel(mlrun.serving.V2ModelServer):
def load(self):
"""load and initialize the model and/or other elements"""
model_file, extra_data = self.get_model('.pkl')
self.model = load(open(model_file, 'rb'))
def predict(self, body: dict) -> List:
"""Generate model predictions from sample."""
feats = np.asarray(body['inputs'])
result: np.ndarray = self.model.predict(feats)
return result.tolist()
# mlrun: end-code
###Output
_____no_output_____
###Markdown
Step 3: Deploying the Model-Serving Function (Service)To provision (deploy) a function for serving the model ("a serving function") you need to create an MLRun function of type `serving`.You can do this by using the `code_to_function` MLRun method from a web notebook, or by importing an existing serving function or template from the MLRun functions marketplace. Converting a Serving Class to a Serving FunctionThe following code converts the `ClassifierModel` class that you defined in the previous step to a serving function.The name of the class to be used by the serving function is set in `spec.default_class`.
###Code
serving_fn = mlrun.code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'
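# Alternatively, a pre-built serving function can be imported from the marketplace
# (a sketch; the hub name is an assumption, check the marketplace for your MLRun version):
# serving_fn = mlrun.import_function('hub://v2_model_server')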
###Output
_____no_output_____
###Markdown
Add the model created in previous notebook by the training function
###Code
model_file = project.get_artifact_uri('my_model')
serving_fn.add_model('my_model',model_path=model_file)
###Output
_____no_output_____
###Markdown
Testing Your Function LocallyTo test your function locally, create a test server (mock server) and test it with sample data.
###Code
my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''
server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)
###Output
> 2022-02-08 19:58:44,716 [info] model my_model was loaded
> 2022-02-08 19:58:44,716 [info] Loaded ['my_model']
###Markdown
Building and Deploying the Serving FunctionUse the `deploy` method of the MLRun serving function to build and deploy a Nuclio serving function from your serving-function code.
###Code
function_address = serving_fn.deploy()
###Output
> 2022-02-08 19:58:50,645 [info] Starting remote function deploy
2022-02-08 19:58:51 (info) Deploying function
2022-02-08 19:58:51 (info) Building
2022-02-08 19:58:52 (info) Staging files and preparing base images
2022-02-08 19:58:52 (info) Building processor image
2022-02-08 19:59:47 (info) Build complete
> 2022-02-08 19:59:52,828 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-getting-started-admin-serving.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['getting-started-admin-serving-getting-started-admin.default-tenant.app.yh41.iguazio-cd1.com/']}
###Markdown
Step 4: Using the Live Model-Serving FunctionAfter the function is deployed successfully, the serving function has a new HTTP endpoint for handling serving requests.The example tutorial serving function receives HTTP prediction (inference) requests on this endpoint;calls the `infer` method to get the requested predictions; and returns the results on the same endpoint.
###Code
print (f'The address for the function is {function_address} \n')
!curl $function_address
###Output
The address for the function is http://getting-started-admin-serving-getting-started-admin.default-tenant.app.yh41.iguazio-cd1.com/
{"name": "ModelRouter", "version": "v2", "extensions": []}
###Markdown
Testing the Model ServerTest your model server by sending data for inference.The `invoke` serving-function method enables programmatic testing of the serving function.For model inference (predictions), specify the model name followed by `infer`:```/v2/models/{model_name}/infer```For complete model-service API commands — such as for list models (`models`), get model health (`ready`), and model explanation (`explain`) — see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md#model-server-api).
###Code
serving_fn.invoke('/v2/models/my_model/infer', my_data)
###Output
> 2022-02-08 19:59:53,584 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-getting-started-admin-serving.default-tenant.svc.cluster.local:8080/v2/models/my_model/infer'}
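###Markdown
The `invoke` helper can exercise the other model-service API commands in the same way; a minimal sketch, assuming the deployed router exposes the standard V2 endpoints listed above:
```python
# List the models served by the router (the `models` command)
serving_fn.invoke('/v2/models/')

# Check that the model is loaded and ready (the `ready` command)
serving_fn.invoke('/v2/models/my_model/ready')
```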
###Markdown
Part 3: Serving an ML ModelThis part of the MLRun getting-started tutorial walks you through the steps for implementing ML model serving using MLRun serving and Nuclio runtimes.The tutorial walks you through the steps for creating, deploying, and testing a model-serving function ("a serving function" a.k.a. "a model server").MLRun serving can produce managed real-time serverless pipelines from various tasks, including MLRun models or standard model files.The pipelines use the Nuclio real-time serverless engine, which can be deployed anywhere.[Nuclio](https://nuclio.io/) is a high-performance open-source "serverless" framework that's focused on data, I/O, and compute-intensive workloads.Simple model serving classes can be written in Python or be taken from a set of pre-developed ML/DL classes.The code can handle complex data, feature preparation, and binary data (such as images and video files).The Nuclio serving engine supports the full model-serving life cycle;this includes auto generation of microservices, APIs, load balancing, model logging and monitoring, and configuration management.MLRun serving supports more advanced real-time data processing and model serving pipelines.For more details and examples, see the [MLRun Serving Graphs](../serving/serving-graph.md) documentation.The tutorial consists of the following steps:1. [Setup and Configuration](#gs-tutorial-3-step-setup) — load your project2. [Writing A Simple Serving Class](#gs-tutorial-3-step-writing-a-serving-class)3. [Deploying the Model-Serving Function (Service)](#gs-tutorial-3-step-deploy-the-serving-function)4. [Using the Live Model-Serving Function](#gs-tutorial-3-step-using-the-live-serving-function)5. [Viewing the Nuclio Serving Function on the Dashboard](#gs-tutorial-3-step-view-serving-func-in-ui)By the end of this tutorial you'll learn how to- Create model-serving functions.- Deploy models at scale.- Test your deployed models. PrerequisitesThe following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.Therefore, make sure to first run parts 1—[2](02-model-training.ipynb) of the tutorial. Step 1: Setup and Configuration Importing LibrariesRun the following code to import required libraries:
###Code
import mlrun
###Output
_____no_output_____
###Markdown
Initializing Your MLRun EnvironmentUse the `get_or_create_project` MLRun method to create a new project or fetch it from the DB/repository if it already exists.Set the `project` and `user_project` parameters to the same values that you used in the call to this method in the [Part 1: MLRun Basics](./01-mlrun-basics.ipynb#gs-tutorial-1-set-mlrun-envr) tutorial notebook.
###Code
# Set the base project name
project_name_base = 'getting-started'
# Initialize the MLRun project object
project = mlrun.get_or_create_project(project_name_base, context="./", user_project=True)
###Output
> 2021-09-09 05:20:36,857 [info] loaded project getting-started from MLRun DB
###Markdown
Step 2: Writing A Simple Serving ClassThe serving class is initialized automatically by the model server.All you need is to implement two mandatory methods:- `load` — downloads the model files and loads the model into memory. This can be done either synchronously or asynchronously.- `predict` — accepts a request payload and returns prediction (inference) results.For more detailed information on serving classes, see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md).The following code demonstrates a minimal scikit-learn (a.k.a. sklearn) serving-class implementation:
###Code
# mlrun: start-code
from cloudpickle import load
import numpy as np
from typing import List
import mlrun
class ClassifierModel(mlrun.serving.V2ModelServer):
def load(self):
"""load and initialize the model and/or other elements"""
model_file, extra_data = self.get_model('.pkl')
self.model = load(open(model_file, 'rb'))
def predict(self, body: dict) -> List:
"""Generate model predictions from sample."""
feats = np.asarray(body['inputs'])
result: np.ndarray = self.model.predict(feats)
return result.tolist()
# mlrun: end-code
###Output
_____no_output_____
###Markdown
Step 3: Deploying the Model-Serving Function (Service)To provision (deploy) a function for serving the model ("a serving function") you need to create an MLRun function of type `serving`.You can do this by using the `code_to_function` MLRun method from a web notebook, or by importing an existing serving function or template from the MLRun functions marketplace. Converting a Serving Class to a Serving FunctionThe following code converts the `ClassifierModel` class that you defined in the previous step to a serving function.The name of the class to be used by the serving function is set in `spec.default_class`.
###Code
serving_fn = mlrun.code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'
###Output
_____no_output_____
###Markdown
Add the model created in the previous notebook by the training function
###Code
model_file = project.get_artifact_uri('my_model')
serving_fn.add_model('my_model',model_path=model_file)
###Output
_____no_output_____
###Markdown
Testing Your Function LocallyTo test your function locally, create a test server (mock server) and test it with sample data.
###Code
my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''
server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)
###Output
> 2021-09-09 05:20:41,738 [info] model my_model was loaded
> 2021-09-09 05:20:41,739 [info] Initializing endpoint records
> 2021-09-09 05:20:41,798 [info] Loaded ['my_model']
###Markdown
Building and Deploying the Serving FunctionUse the `deploy` method of the MLRun serving function to build and deploy a Nuclio serving function from your serving-function code.
###Code
function_address = serving_fn.deploy()
###Output
> 2021-09-09 05:20:41,815 [info] Starting remote function deploy
2021-09-09 05:20:42 (info) Deploying function
2021-09-09 05:20:42 (info) Building
2021-09-09 05:20:42 (info) Staging files and preparing base images
2021-09-09 05:20:42 (info) Building processor image
2021-09-09 05:20:43 (info) Build complete
2021-09-09 05:20:49 (info) Function deploy complete
> 2021-09-09 05:20:50,139 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-getting-started-iguazio-serving.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['getting-started-iguazio-serving-getting-started-iguazio.default-tenant.app.jnewriujxdig.iguazio-cd1.com/']}
###Markdown
Step 4: Using the Live Model-Serving FunctionAfter the function is deployed successfully, the serving function has a new HTTP endpoint for handling serving requests.The example tutorial serving function receives HTTP prediction (inference) requests on this endpoint;calls the `infer` method to get the requested predictions; and returns the results on the same endpoint.
###Code
print (f'The address for the function is {function_address} \n')
!curl $function_address
###Output
The address for the function is http://getting-started-iguazio-serving-getting-started-iguazio.default-tenant.app.jnewriujxdig.iguazio-cd1.com/
{"name": "ModelRouter", "version": "v2", "extensions": []}
###Markdown
Testing the Model ServerTest your model server by sending data for inference.The `invoke` serving-function method enables programmatic testing of the serving function.For model inference (predictions), specify the model name followed by `infer`:```/v2/models/{model_name}/infer```For complete model-service API commands — such as for list models (`models`), get model health (`ready`), and model explanation (`explain`) — see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md#model-server-api).
###Code
serving_fn.invoke('/v2/models/my_model/infer', my_data)
###Output
> 2021-09-09 05:20:50,904 [info] invoking function: {'method': 'POST', 'path': 'http://nuclio-getting-started-iguazio-serving.default-tenant.svc.cluster.local:8080/v2/models/my_model/infer'}
###Markdown
Part 3: Serving an ML ModelThis part of the MLRun getting-started tutorial walks you through the steps for implementing ML model serving using MLRun serving and Nuclio runtimes.The tutorial walks you through the steps for creating, deploying, and testing a model-serving function ("a serving function" a.k.a. "a model server").MLRun serving can produce managed real-time serverless pipelines from various tasks, including MLRun models or standard model files.The pipelines use the Nuclio real-time serverless engine, which can be deployed anywhere.[Nuclio](https://nuclio.io/) is a high-performance open-source "serverless" framework that's focused on data, I/O, and compute-intensive workloads.Simple model serving classes can be written in Python or be taken from a set of pre-developed ML/DL classes.The code can handle complex data, feature preparation, and binary data (such as images and video files).The Nuclio serving engine supports the full model-serving life cycle;this includes auto generation of microservices, APIs, load balancing, model logging and monitoring, and configuration management.MLRun serving supports more advanced real-time data processing and model serving pipelines.For more details and examples, see the [MLRun Serving Graphs](../serving/serving-graph.md) documentation.The tutorial consists of the following steps:1. [Setup and Configuration](#gs-tutorial-3-step-setup) — load your project2. [Writing A Simple Serving Class](#gs-tutorial-3-step-writing-a-serving-class)3. [Deploying the Model-Serving Function (Service)](#gs-tutorial-3-step-deploy-the-serving-function)4. [Using the Live Model-Serving Function](#gs-tutorial-3-step-using-the-live-serving-function)5. [Viewing the Nuclio Serving Function on the Dashboard](#gs-tutorial-3-step-view-serving-func-in-ui)By the end of this tutorial you'll learn how to- Create model-serving functions.- Deploy models at scale.- Test your deployed models. PrerequisitesThe following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.Therefore, make sure to first run parts 1—[2](02-model-training.ipynb) of the tutorial. Step 1: Setup and Configuration Importing LibrariesRun the following code to import required libraries:
###Code
from os import path
import mlrun
###Output
_____no_output_____
###Markdown
Initializing Your MLRun EnvironmentUse the `set_environment` MLRun method to configure the working environment and default configuration.Set the `project` and `user_project` parameters to the same values that you used in the call to this method in the [Part 1: MLRun Basics](./01-mlrun-basics.ipynb#gs-tutorial-1-set-mlrun-envr) tutorial notebook.
###Code
# Set the base project name
project_name_base = 'getting-started-tutorial'
# Initialize the MLRun environment and save the project name and artifacts path
project_name, artifact_path = mlrun.set_environment(project=project_name_base,
user_project=True)
###Output
_____no_output_____
###Markdown
Step 2: Writing A Simple Serving ClassThe serving class is initialized automatically by the model server.All you need is to implement two mandatory methods:- `load` — downloads the model files and loads the model into memory. This can be done either synchronously or asynchronously.- `predict` — accepts a request payload and returns prediction (inference) results.For more detailed information on serving classes, see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md).The following code demonstrates a minimal scikit-learn (a.k.a. sklearn) serving-class implementation:
###Code
# mlrun: start-code
from cloudpickle import load
import numpy as np
from typing import List
import mlrun
class ClassifierModel(mlrun.serving.V2ModelServer):
def load(self):
"""load and initialize the model and/or other elements"""
model_file, extra_data = self.get_model('.pkl')
self.model = load(open(model_file, 'rb'))
def predict(self, body: dict) -> List:
"""Generate model predictions from sample."""
feats = np.asarray(body['inputs'])
result: np.ndarray = self.model.predict(feats)
return result.tolist()
# mlrun: end-code
###Output
_____no_output_____
###Markdown
Step 3: Deploying the Model-Serving Function (Service)To provision (deploy) a function for serving the model ("a serving function") you need to create an MLRun function of type `serving`.You can do this by using the `code_to_function` MLRun method from a web notebook, or by importing an existing serving function or template from the MLRun functions marketplace. Converting a Serving Class to a Serving FunctionThe following code converts the `ClassifierModel` class that you defined in the previous step to a serving function.The name of the class to be used by the serving function is set in `spec.default_class`.
###Code
from mlrun import code_to_function
serving_fn = code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'
###Output
_____no_output_____
###Markdown
Add the model created in the previous notebook by the training function
###Code
model_file = f'store://{project_name}/train-iris-train_iris_model'
serving_fn.add_model('my_model',model_path=model_file)
from mlrun.platforms import auto_mount
serving_fn = serving_fn.apply(auto_mount())
###Output
_____no_output_____
###Markdown
Testing Your Function LocallyTo test your function locally, create a test server (mock server) and test it with sample data.
###Code
my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''
server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)
###Output
_____no_output_____
###Markdown
Building and Deploying the Serving FunctionUse the `deploy` method of the MLRun serving function to build and deploy a Nuclio serving function from your serving-function code.
###Code
function_address = serving_fn.deploy()
###Output
> 2021-01-25 08:40:23,461 [info] Starting remote function deploy
2021-01-25 08:40:23 (info) Deploying function
2021-01-25 08:40:23 (info) Building
2021-01-25 08:40:23 (info) Staging files and preparing base images
2021-01-25 08:40:23 (info) Building processor image
2021-01-25 08:40:24 (info) Build complete
2021-01-25 08:40:30 (info) Function deploy complete
> 2021-01-25 08:40:31,117 [info] function deployed, address=default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
###Markdown
Step 4: Using the Live Model-Serving FunctionAfter the function is deployed successfully, the serving function has a new HTTP endpoint for handling serving requests.The example tutorial serving function receives HTTP prediction (inference) requests on this endpoint;calls the `infer` method to get the requested predictions; and returns the results on the same endpoint.
###Code
print (f'The address for the function is {function_address} \n')
!curl $function_address
###Output
The address for the function is default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
{"name": "ModelRouter", "version": "v2", "extensions": []}
###Markdown
Testing the Model ServerTest your model server by sending data for inference.The `invoke` serving-function method enables programmatic testing of the serving function.For model inference (predictions), specify the model name followed by `infer`:```/v2/models/{model_name}/infer```For complete model-service API commands — such as for list models (`models`), get model health (`ready`), and model explanation (`explain`) — see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md#model-server-api).
###Code
serving_fn.invoke('/v2/models/my_model/infer', my_data)
###Output
_____no_output_____
###Markdown
Part 3: Serving an ML ModelThis part of the MLRun getting-started tutorial walks you through the steps for implementing ML model serving using MLRun serving and Nuclio runtimes.The tutorial walks you through the steps for creating, deploying, and testing a model-serving function ("a serving function" a.k.a. "a model server").MLRun serving can produce managed real-time serverless pipelines from various tasks, including MLRun models or standard model files.The pipelines use the Nuclio real-time serverless engine, which can be deployed anywhere.[Nuclio](https://nuclio.io/) is a high-performance open-source "serverless" framework that's focused on data, I/O, and compute-intensive workloads.Simple model serving classes can be written in Python or be taken from a set of pre-developed ML/DL classes.The code can handle complex data, feature preparation, and binary data (such as images and video files).The Nuclio serving engine supports the full model-serving life cycle;this includes auto generation of microservices, APIs, load balancing, model logging and monitoring, and configuration management.MLRun serving supports more advanced real-time data processing and model serving pipelines.For more details and examples, see the [MLRun Serving Graphs](../serving/serving-graph.md) documentation.The tutorial consists of the following steps:1. [Setup and Configuration](#gs-tutorial-3-step-setup) — load your project2. [Writing A Simple Serving Class](#gs-tutorial-3-step-writing-a-serving-class)3. [Deploying the Model-Serving Function (Service)](#gs-tutorial-3-step-deploy-the-serving-function)4. [Using the Live Model-Serving Function](#gs-tutorial-3-step-using-the-live-serving-function)5. [Viewing the Nuclio Serving Function on the Dashboard](#gs-tutorial-3-step-view-serving-func-in-ui)By the end of this tutorial you'll learn how to- Create model-serving functions.- Deploy models at scale.- Test your deployed models. PrerequisitesThe following steps are a continuation of the previous parts of this getting-started tutorial and rely on the generated outputs.Therefore, make sure to first run parts 1—[2](02-model-training.ipynb) of the tutorial. Step 1: Setup and Configuration Importing LibrariesRun the following code to import required libraries:
###Code
from os import path
import mlrun
###Output
_____no_output_____
###Markdown
Initializing Your MLRun EnvironmentUse the `set_environment` MLRun method to configure the working environment and default configuration.Set the `project` and `user_project` parameters to the same values that you used in the call to this method in the [Part 1: MLRun Basics](./01-mlrun-basics.ipynb#gs-tutorial-1-set-mlrun-envr) tutorial notebook.
###Code
# Set the base project name
project_name_base = 'getting-started-tutorial'
# Initialize the MLRun environment and save the project name and artifacts path
project_name, artifact_path = mlrun.set_environment(project=project_name_base,
user_project=True)
###Output
_____no_output_____
###Markdown
Step 2: Writing A Simple Serving ClassThe serving class is initialized automatically by the model server.All you need is to implement two mandatory methods:- `load` — downloads the model files and loads the model into memory. This can be done either synchronously or asynchronously.- `predict` — accepts a request payload and returns prediction (inference) results.For more detailed information on serving classes, see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md).The following code demonstrates a minimal scikit-learn (a.k.a. sklearn) serving-class implementation:
###Code
# nuclio: start-code
from cloudpickle import load
import numpy as np
from typing import List
import mlrun
class ClassifierModel(mlrun.serving.V2ModelServer):
def load(self):
"""load and initialize the model and/or other elements"""
model_file, extra_data = self.get_model('.pkl')
self.model = load(open(model_file, 'rb'))
def predict(self, body: dict) -> List:
"""Generate model predictions from sample."""
feats = np.asarray(body['inputs'])
result: np.ndarray = self.model.predict(feats)
return result.tolist()
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Step 3: Deploying the Model-Serving Function (Service)To provision (deploy) a function for serving the model ("a serving function") you need to create an MLRun function of type `serving`.You can do this by using the `code_to_function` MLRun method from a web notebook, or by importing an existing serving function or template from the MLRun functions marketplace. Converting a Serving Class to a Serving FunctionThe following code converts the `ClassifierModel` class that you defined in the previous step to a serving function.The name of the class to be used by the serving function is set in `spec.default_class`.
###Code
from mlrun import code_to_function
serving_fn = code_to_function('serving', kind='serving',image='mlrun/mlrun')
serving_fn.spec.default_class = 'ClassifierModel'
###Output
_____no_output_____
###Markdown
Add the model created in the previous notebook by the training function
###Code
model_file = f'store://{project_name}/train-iris-train_iris_model'
serving_fn.add_model('my_model',model_path=model_file)
from mlrun.platforms import auto_mount
serving_fn = serving_fn.apply(auto_mount())
###Output
_____no_output_____
###Markdown
Testing Your Function LocallyTo test your function locally, create a test server (mock server) and test it with sample data.
###Code
my_data = '''{"inputs":[[5.1, 3.5, 1.4, 0.2],[7.7, 3.8, 6.7, 2.2]]}'''
server = serving_fn.to_mock_server()
server.test("/v2/models/my_model/infer", body=my_data)
###Output
_____no_output_____
###Markdown
Building and Deploying the Serving FunctionUse the `deploy` method of the MLRun serving function to build and deploy a Nuclio serving function from your serving-function code.
###Code
function_address = serving_fn.deploy()
###Output
> 2021-01-25 08:40:23,461 [info] Starting remote function deploy
2021-01-25 08:40:23 (info) Deploying function
2021-01-25 08:40:23 (info) Building
2021-01-25 08:40:23 (info) Staging files and preparing base images
2021-01-25 08:40:23 (info) Building processor image
2021-01-25 08:40:24 (info) Build complete
2021-01-25 08:40:30 (info) Function deploy complete
> 2021-01-25 08:40:31,117 [info] function deployed, address=default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
###Markdown
Step 4: Using the Live Model-Serving FunctionAfter the function is deployed successfully, the serving function has a new HTTP endpoint for handling serving requests.The example tutorial serving function receives HTTP prediction (inference) requests on this endpoint;calls the `infer` method to get the requested predictions; and returns the results on the same endpoint.
###Code
print (f'The address for the function is {function_address} \n')
!curl $function_address
###Output
The address for the function is default-tenant.app.aefccdjffbit.iguazio-cd0.com:31805
{"name": "ModelRouter", "version": "v2", "extensions": []}
###Markdown
Testing the Model ServerTest your model server by sending data for inference.The `invoke` serving-function method enables programmatic testing of the serving function.For model inference (predictions), specify the model name followed by `infer`:```/v2/models/{model_name}/infer```For complete model-service API commands — such as for list models (`models`), get model health (`ready`), and model explanation (`explain`) — see the [MLRun documentation](https://github.com/mlrun/mlrun/blob/release/v0.6.x-latest/mlrun/serving/README.md#model-server-api).
###Code
serving_fn.invoke('/v2/models/my_model/infer', my_data)
###Output
_____no_output_____
deeplearning.ai - TensorFlow in Practice Specialization/deeplearning.ai - Natural Language Processing in TensorFlow/module1- Sentiment in text/Lesson_2.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
sentences = [
'I love my dog',
'I love my cat',
'You love my dog!',
'Do you think my dog is amazing?'
]
tokenizer = Tokenizer(num_words = 100, oov_token="<OOV>")
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=5)
print("\nWord Index = " , word_index)
print("\nSequences = " , sequences)
print("\nPadded Sequences:")
print(padded)
# Try with words that the tokenizer wasn't fit to
test_data = [
'i really love my dog',
'my dog loves my manatee'
]
test_seq = tokenizer.texts_to_sequences(test_data)
print("\nTest Sequence = ", test_seq)
padded = pad_sequences(test_seq, maxlen=10)
print("\nPadded Test Sequence: ")
print(padded)
###Output
Word Index = {'<OOV>': 1, 'my': 2, 'love': 3, 'dog': 4, 'i': 5, 'you': 6, 'cat': 7, 'do': 8, 'think': 9, 'is': 10, 'amazing': 11}
Sequences = [[5, 3, 2, 4], [5, 3, 2, 7], [6, 3, 2, 4], [8, 6, 9, 2, 4, 10, 11]]
Padded Sequences:
[[ 0 5 3 2 4]
[ 0 5 3 2 7]
[ 0 6 3 2 4]
[ 9 2 4 10 11]]
Test Sequence = [[5, 1, 3, 2, 4], [2, 4, 1, 2, 1]]
Padded Test Sequence:
[[0 0 0 0 0 5 1 3 2 4]
[0 0 0 0 0 2 4 1 2 1]]
|
2. ITW2/05_Prefix_Infix_and_Postfix.ipynb | ###Markdown
Write a program to evaluate any one of the following as valid or invalid.* Postfix, e.g. ((AB*)(CD/)+)* Infix, e.g. ((A*B)+(C/D))* Prefix, e.g. (+(*AB)(/CD)) Write a program to convert an:* infix to postfix expression and vice versa* infix to prefix expression and vice versa* prefix to postfix expression and vice versa
###Code
# Check for operator
def is_operator(temp):
if temp in '+-*/':
return True
return False
# Check for bracket
def is_bracket(temp):
if temp in '()':
return True
return False
# Check for operand
def is_operand(temp):
if temp in 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ':
return True
return False
# Validate prefix
def v_prefix(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
temp_1 = s.pop(-4)
temp_2 = s.pop(-3)
temp_3 = s.pop(-2)
temp_4 = s.pop()
s.append('x')
if all([is_operator(temp_2), is_operand(temp_3), is_operand(temp_4), temp_1 == '(', i == ')']):
continue
else:
return False
if len(s) == 1:
return True
else:
return False
a = input()
print(v_prefix(a))
#Validate infix
def v_infix(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
temp_1 = s.pop(-4)
temp_2 = s.pop(-3)
temp_3 = s.pop(-2)
temp_4 = s.pop()
s.append('x')
if all([is_operand(temp_2), is_operator(temp_3), is_operand(temp_4), temp_1 == '(', i == ')']):
continue
else:
return False
if len(s) == 1:
return True
else:
return False
a = input()
print(v_infix(a))
#Validate postfix
def v_postfix(temp):
a = []
for i in temp:
if is_bracket(i):
continue
if not is_operator(i):
a.append(i)
else:
temp = str(a.pop(-2))
temp += str(a.pop())
temp += i
a.append(temp)
if len(a) == 1:
return True
return False
a = input()
print(v_postfix(a))
###Output
_____no_output_____
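###Markdown
Before moving on to the conversions, a quick sanity check of the three validators using the example expressions from the problem statement at the top of this notebook:
```python
print(v_prefix('(+(*AB)(/CD))'))    # expected: True
print(v_infix('((A*B)+(C/D))'))     # expected: True
print(v_postfix('((AB*)(CD/)+)'))   # expected: True
```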
###Markdown
Conversions
###Code
# Convert a fully parenthesized prefix expression to infix
def pre_in(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
a = ''
a += s.pop(-4)
a += s.pop(-2)
a += s.pop(-2)
a += s.pop()
a += i
s.append(a)
return str(s[0])
a = input()
print(pre_in(a))
# Convert a fully parenthesized prefix expression to postfix
def pre_post(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
a = ''
a += s.pop(-4)
a += s.pop(-2)
a += s.pop()
a += s.pop()
a += i
s.append(a)
return str(s[0])
a = input()
print(pre_post(a))
# Convert a fully parenthesized infix expression to postfix
def in_post(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
a = ''
a += s.pop(-4)
a += s.pop(-3)
a += s.pop()
a += s.pop()
a += i
s.append(a)
return str(s[0])
a = input()
print(in_post(a))
# Convert a fully parenthesized infix expression to prefix
def in_pre(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
a = ''
a += s.pop(-4)
a += s.pop(-2)
a += s.pop(-2)
a += s.pop()
a += i
s.append(a)
return str(s[0])
a = input()
print(in_pre(a))
# Convert a fully parenthesized postfix expression to prefix
def post_pre(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
a = ''
a += s.pop(-4)
a += s.pop()
a += s.pop(-2)
a += s.pop()
a += i
s.append(a)
return str(s[0])
# Convert a fully parenthesized postfix expression to infix
def post_in(temp):
s = []
for i in temp:
if i != ')':
s.append(i)
elif i == ')':
a = ''
a += s.pop(-4)
a += s.pop(-3)
a += s.pop()
a += s.pop()
a += i
s.append(a)
return str(s[0])
a = input()
print(post_pre(a), post_in(a))
###Output
_____no_output_____
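###Markdown
The converters can be cross-checked by round-tripping an expression; a small sketch using the infix example from the problem statement:
```python
expr = '((A*B)+(C/D))'
# Converting to prefix/postfix and back should reproduce the original infix form
assert pre_in(in_pre(expr)) == expr
assert post_in(in_post(expr)) == expr
print('round-trip OK')
```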
covid_vs_normal.ipynb | ###Markdown
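The notebook's import cell is not shown in this excerpt; a minimal sketch of the imports the cells below rely on (assuming TensorFlow 2.x Keras and scikit-learn):
```python
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from sklearn.metrics import (confusion_matrix, accuracy_score, precision_score,
                             recall_score, f1_score, roc_curve, auc)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from tensorflow.keras.applications import VGG19
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.layers import Conv2D, Dense, Dropout, Flatten, Input, MaxPooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical
```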
Data Processessing
###Code
# counting the amount of x rays images
len_normal_train = len([iq for iq in os.scandir('xrays/covidvsnormal/train/normal')])
len_normal_test = len([iq for iq in os.scandir('xrays/covidvsnormal/test/normal')])
len_normal_val = len([iq for iq in os.scandir('xrays/covidvsnormal/val/normal')])
len_covid_train = len([iq for iq in os.scandir('xrays/covidvsnormal/train/covid')])
len_covid_val = len([iq for iq in os.scandir('xrays/covidvsnormal/val/covid')])
len_covid_test = len([iq for iq in os.scandir('xrays/covidvsnormal/test/covid')])
len_train_total = len_normal_train + len_covid_train
len_val_total = len_covid_val + len_normal_val
len_test_total = len_normal_test + len_covid_test
print("Total")
print("---------------------")
print ("normal: ", len_normal_train)
print ("normal no val: ", len_normal_val)
print ("normal no test: ", len_normal_test)
print("---------------------")
print ("covid no train: ", len_covid_train)
print ("covid no val: ", len_covid_val)
print ("covid no test: ", len_covid_test)
print()
print("Total")
print("---------------------")
print("total train: ", len_train_total)
print("total val: ", len_val_total)
print("total test: ", len_test_total)
# extracting the images
DIR_NAME = 'xrays/covidvsnormal/'
imagePaths=[]
for dirname, _, filenames in os.walk(DIR_NAME):
for filename in filenames:
imagePaths.append(os.path.join(dirname, filename))
# verifying if the images have been extracted
imagePaths
# assigining the labels to the images
data = []
labels = []
# loop over the image paths
for imagePath in imagePaths:
# extract the class label from the filename
label = imagePath.split(os.path.sep)[-2]
# load the image, swap color channels, and resize it to be a fixed
# 64x64 pixels while ignoring aspect ratio
image = cv2.imread(imagePath)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (64, 64))
# update the data and labels lists, respectively
data.append(image)
labels.append(label)
# convert the data and labels to NumPy arrays while scaling the pixel
# intensities to the range [0, 1]
data = np.array(data) / 255.0
labels = np.array(labels)
# verifying the shape
data.shape
# verifying the labels
labels
# perform one-hot encoding on the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(x_train, x_test, y_train, y_test) = train_test_split(data, labels,
test_size=0.20, stratify=labels, random_state=42)
###Output
_____no_output_____
###Markdown
Data Augmentation
###Code
# Data Augmentation
def process_data(x_train,y_train, x_test,y_test, batch_size):
#to prevent overfitting
train_datagen = ImageDataGenerator(shear_range=0.2,
rotation_range=20,
width_shift_range=0.2,
height_shift_range=0.2,
horizontal_flip=True,
vertical_flip=False,
zoom_range=0.2)
validation_datagen = ImageDataGenerator()
train_generator = train_datagen.flow(x_train,y_train,
batch_size=batch_size)
validation_generator = validation_datagen.flow(x_test,y_test,
batch_size=batch_size)
return train_generator, validation_generator
###Output
_____no_output_____
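###Markdown
Before wiring the generators into training, it helps to pull one augmented batch and confirm the shapes; a small sketch:
```python
# Fetch a single augmented batch: 32 images of 64x64x3 plus one-hot labels
train_gen, val_gen = process_data(x_train, y_train, x_test, y_test, batch_size=32)
images, targets = next(train_gen)
print(images.shape, targets.shape)   # expected: (32, 64, 64, 3) (32, 2)
```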
###Markdown
Transfer Learning
###Code
# adding top layer
def addTopModel(bottom_model, num_classes):
top_model = bottom_model.output
top_model = MaxPooling2D(pool_size=(2,2), strides=2)(top_model)
top_model = Flatten(name="flatten")(top_model)
top_model = Dense(512, activation="relu")(top_model)
top_model = Dropout(0.5)(top_model)
    top_model = Dense(num_classes, activation='sigmoid')(top_model)
model = Model(inputs=bottom_model.input, outputs=top_model)
return model
def get_model(transfer_learner, x_train, y_train, x_test, y_test):
    m = transfer_learner
    model = addTopModel(m, num_classes)
#data augmentation
train_generator, validation_generator = process_data(x_train,y_train,
x_test,y_test, batch_size)
model.summary()
checkpoint = ModelCheckpoint(data_dir+'modelcnd'+'.h5',
monitor='val_loss',
mode="min",
save_best_only=True,
verbose=1)
earlystop = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=30,
verbose=1,
restore_best_weights=True)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_loss',
patience=30,
verbose=1,
factor=0.8,
min_lr=0.0001,
mode="auto",
min_delta=0.0001,
cooldown=5)
callbacks = [checkpoint, earlystop, learning_rate_reduction]
model.compile(loss=models_loss,
optimizer=models_opt,
metrics=['accuracy'])
    history = model.fit_generator(train_generator,
                        steps_per_epoch=len_train_total//batch_size,
                        epochs=epochs,
                        callbacks=callbacks,
                        validation_data=validation_generator,
                        validation_steps=len_val_total//batch_size)
return model, history
# build sequential model
def build_model(bottom_model, num_classes):
top_model = bottom_model.output
top_model = MaxPooling2D(pool_size=(2,2), strides=2)(top_model)
top_model = Flatten(name="flatten")(top_model)
    top_model = Dense(num_classes, activation='sigmoid')(top_model)
model = Model(inputs=bottom_model.input, outputs=top_model)
return model
###Output
_____no_output_____
###Markdown
Confusion Matrix
###Code
# plotting confusion matrix for testing
def plot_confusion_matrix(model, x_test, y_test):
fig, ax = plt.subplots(figsize=(8,6))
classes = ['COVID','NORMAL']
y_pred = model.predict(x_test, batch_size=batch_size)
y_pred = np.argmax(y_pred, axis=1)
y_test = np.argmax(y_test, axis=1)
print('Confusion Matrix')
cm = confusion_matrix(y_test, y_pred, normalize='true')
#cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sns.heatmap(cm,cmap='Purples',fmt='g', annot=True)
tick_marks = [0.5,1.5]
plt.xticks(tick_marks, classes)
plt.yticks(tick_marks, classes)
plt.title('Confusion Matrix - Normalized')
plt.ylabel('True label')
plt.xlabel('Predicted label')
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
plt.show()
acc = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred, average=None)
recall = recall_score(y_test, y_pred, average=None)
f1 = f1_score(y_test, y_pred, average=None)
print("Precision Score: {}".format(precision))
print("Recall Score: {}".format(recall))
print("F1 Score: {}".format(f1))
print("Accuracy Score: {}".format(acc))
return plt.show()
###Output
_____no_output_____
###Markdown
ROC AUC
###Code
#ROC curve
def multiclass_roc_auc_score(x_test, y_test, model, average="micro"):
y_pred = model.predict(x_test, batch_size=batch_size)
# Convert to Binary classes
y_pred_bin = np.argmax(y_pred, axis=1)
y_test_bin = np.argmax(y_test, axis=1)
fpr, tpr, thresholds = roc_curve(y_test_bin, y_pred_bin)
auc_keras = auc(fpr, tpr)
print('AUC: {}'.format(auc_keras))
print('Log Loss: {}'.format(metrics.log_loss(y_test.argmax(axis=1), y_pred)))
plt.plot(fpr, tpr, color='darkorange', label='ROC curve (area = %0.2f)' % auc_keras)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.rcParams['font.size'] = 12
plt.title('ROC curve for our model')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='best', fancybox=True)
plt.grid(True)
return plt.show()
###Output
_____no_output_____
###Markdown
Model Loss
###Code
#model loss
def plot_learning_curves(r):
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.plot(r.history['loss'])
plt.plot(r.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.grid(True)
plt.subplot(1,2,2)
plt.plot(r.history['accuracy'])
plt.plot(r.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.grid(True)
plt.tight_layout()
return plt.show()
###Output
_____no_output_____
###Markdown
Assigning Parameters
###Code
#parameters
data_dir = '/'
version = '-v5-'
num_classes = 2
ig_size = 64
epochs = 100
batch_size = 32
models_loss = 'binary_crossentropy'
models_opt = 'adam' #Adam(lr=0.001, decay=0.001/600) #SGD(learning_rate=0.001, momentum=0.9) #ADAM(lr=0.001)
###Output
_____no_output_____
###Markdown
Modeling - Base CNN ModelFirst, I used the sequential Keras model, which is the easiest way to build a model in Keras: it allows one to build a model layer by layer. My first layer is a Conv2D layer with 64 filters that handles the input x-ray images of shape (64, 64, 3); the 3 in the shape signifies that the images are in RGB. The kernel size is set to 3, which means the filter used to scan across the image is a 3x3 matrix. Next, a MaxPooling2D layer is used to reduce overfitting and dimensionality, i.e. to reduce the number of parameters to learn and the amount of computation performed in the network; it takes the maximum value of each 2x2 patch while moving 2 strides at a time, halving each spatial dimension. Finally, I flattened the MaxPooling output to make it one-dimensional and passed it to a Dense layer with 2 units, one per output class to be predicted.
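As a cross-check on the description above, the shapes and parameter counts reported in the model summary below can be reproduced by hand:
```python
# Conv2D: 64 filters, each 3x3 over 3 input channels, plus one bias per filter
conv_params = (3 * 3 * 3 + 1) * 64      # 1792
# A 'valid' 3x3 convolution shrinks 64x64 to 62x62; 2x2 stride-2 pooling gives 31x31
flat_units = 31 * 31 * 64               # 61504
# Dense: one weight per flattened unit per output class, plus 2 biases
dense_params = flat_units * 2 + 2       # 123010
print(conv_params, flat_units, dense_params)
```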
###Code
# calling the sequential model
m = Sequential([Conv2D(ig_size, kernel_size=3, activation='relu',
                       input_shape=(64,64,3))])
model = build_model(m, num_classes)
for layer in m.layers:
layer.trainable = False
#data augmentation
train_generator, validation_generator = process_data(x_train,y_train,
x_test,y_test, batch_size)
model.summary()
checkpoint = ModelCheckpoint(data_dir+'sequentialpnd' + '.h5',
monitor='val_loss',
mode="min",
save_best_only=True,
verbose=1)
earlystop = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=30,
verbose=1,
restore_best_weights=True)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_loss',
patience=30,
verbose=1,
factor=0.8,
min_lr=0.0001,
mode="auto",
min_delta=0.0001,
cooldown=5)
callbacks = [checkpoint, earlystop, learning_rate_reduction]
model.compile(loss=models_loss,
optimizer=models_opt,
metrics=['accuracy'])
history = model.fit(train_generator,
epochs=epochs,
callbacks=callbacks,
validation_data=validation_generator,
validation_steps=len(x_test) / batch_size,
steps_per_epoch=len(x_train) / batch_size)
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_input (InputLayer) [(None, 64, 64, 3)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 62, 62, 64) 1792
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 31, 31, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 61504) 0
_________________________________________________________________
dense (Dense) (None, 2) 123010
=================================================================
Total params: 124,802
Trainable params: 123,010
Non-trainable params: 1,792
_________________________________________________________________
Epoch 1/100
56/55 [==============================] - ETA: 0s - loss: 0.4357 - accuracy: 0.8181
Epoch 00001: val_loss improved from inf to 0.31784, saving model to /sequentialpnd.h5
56/55 [==============================] - 9s 155ms/step - loss: 0.4357 - accuracy: 0.8181 - val_loss: 0.3178 - val_accuracy: 0.8668 - lr: 0.0010
Epoch 2/100
56/55 [==============================] - ETA: 0s - loss: 0.2440 - accuracy: 0.9096
Epoch 00002: val_loss improved from 0.31784 to 0.19426, saving model to /sequentialpnd.h5
56/55 [==============================] - 7s 123ms/step - loss: 0.2440 - accuracy: 0.9096 - val_loss: 0.1943 - val_accuracy: 0.9278 - lr: 0.0010
Epoch 3/100
56/55 [==============================] - ETA: 0s - loss: 0.2174 - accuracy: 0.9288
Epoch 00003: val_loss improved from 0.19426 to 0.15264, saving model to /sequentialpnd.h5
56/55 [==============================] - 6s 103ms/step - loss: 0.2174 - accuracy: 0.9288 - val_loss: 0.1526 - val_accuracy: 0.9481 - lr: 0.0010
Epoch 4/100
56/55 [==============================] - ETA: 0s - loss: 0.2118 - accuracy: 0.9271 ETA: 0s - loss: 0.2128 - accuracy: 0.
Epoch 00004: val_loss improved from 0.15264 to 0.14179, saving model to /sequentialpnd.h5
56/55 [==============================] - 6s 111ms/step - loss: 0.2118 - accuracy: 0.9271 - val_loss: 0.1418 - val_accuracy: 0.9503 - lr: 0.0010
Epoch 5/100
56/55 [==============================] - ETA: 0s - loss: 0.2253 - accuracy: 0.9209
Epoch 00005: val_loss did not improve from 0.14179
56/55 [==============================] - 5s 87ms/step - loss: 0.2253 - accuracy: 0.9209 - val_loss: 0.1761 - val_accuracy: 0.9345 - lr: 0.0010
Epoch 6/100
56/55 [==============================] - ETA: 0s - loss: 0.2104 - accuracy: 0.9243
Epoch 00006: val_loss improved from 0.14179 to 0.13128, saving model to /sequentialpnd.h5
56/55 [==============================] - 5s 92ms/step - loss: 0.2104 - accuracy: 0.9243 - val_loss: 0.1313 - val_accuracy: 0.9571 - lr: 0.0010
Epoch 7/100
56/55 [==============================] - ETA: 0s - loss: 0.1865 - accuracy: 0.9362
Epoch 00007: val_loss improved from 0.13128 to 0.09596, saving model to /sequentialpnd.h5
56/55 [==============================] - 5s 94ms/step - loss: 0.1865 - accuracy: 0.9362 - val_loss: 0.0960 - val_accuracy: 0.9639 - lr: 0.0010
Epoch 8/100
56/55 [==============================] - ETA: 0s - loss: 0.2091 - accuracy: 0.9220
Epoch 00008: val_loss did not improve from 0.09596
56/55 [==============================] - 5s 86ms/step - loss: 0.2091 - accuracy: 0.9220 - val_loss: 0.1074 - val_accuracy: 0.9639 - lr: 0.0010
Epoch 9/100
56/55 [==============================] - ETA: 0s - loss: 0.1760 - accuracy: 0.9350
Epoch 00009: val_loss did not improve from 0.09596
56/55 [==============================] - 6s 100ms/step - loss: 0.1760 - accuracy: 0.9350 - val_loss: 0.1473 - val_accuracy: 0.9503 - lr: 0.0010
Epoch 10/100
56/55 [==============================] - ETA: 0s - loss: 0.1979 - accuracy: 0.9345
Epoch 00010: val_loss improved from 0.09596 to 0.09018, saving model to /sequentialpnd.h5
56/55 [==============================] - 5s 90ms/step - loss: 0.1979 - accuracy: 0.9345 - val_loss: 0.0902 - val_accuracy: 0.9639 - lr: 0.0010
Epoch 11/100
56/55 [==============================] - ETA: 0s - loss: 0.1820 - accuracy: 0.9345
Epoch 00011: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 88ms/step - loss: 0.1820 - accuracy: 0.9345 - val_loss: 0.1316 - val_accuracy: 0.9549 - lr: 0.0010
Epoch 12/100
56/55 [==============================] - ETA: 0s - loss: 0.2237 - accuracy: 0.9215
Epoch 00012: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 87ms/step - loss: 0.2237 - accuracy: 0.9215 - val_loss: 0.1839 - val_accuracy: 0.9436 - lr: 0.0010
Epoch 13/100
56/55 [==============================] - ETA: 0s - loss: 0.2095 - accuracy: 0.9254
Epoch 00013: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 85ms/step - loss: 0.2095 - accuracy: 0.9254 - val_loss: 0.1865 - val_accuracy: 0.9436 - lr: 0.0010
Epoch 14/100
56/55 [==============================] - ETA: 0s - loss: 0.2069 - accuracy: 0.9288
Epoch 00014: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 85ms/step - loss: 0.2069 - accuracy: 0.9288 - val_loss: 0.2235 - val_accuracy: 0.9345 - lr: 0.0010
Epoch 15/100
56/55 [==============================] - ETA: 0s - loss: 0.1793 - accuracy: 0.9395
Epoch 00015: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 85ms/step - loss: 0.1793 - accuracy: 0.9395 - val_loss: 0.1901 - val_accuracy: 0.9391 - lr: 0.0010
Epoch 16/100
56/55 [==============================] - ETA: 0s - loss: 0.1673 - accuracy: 0.9424
Epoch 00016: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 84ms/step - loss: 0.1673 - accuracy: 0.9424 - val_loss: 0.1236 - val_accuracy: 0.9639 - lr: 0.0010
Epoch 17/100
56/55 [==============================] - ETA: 0s - loss: 0.1743 - accuracy: 0.9390
Epoch 00017: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 84ms/step - loss: 0.1743 - accuracy: 0.9390 - val_loss: 0.1153 - val_accuracy: 0.9639 - lr: 0.0010
Epoch 18/100
56/55 [==============================] - ETA: 0s - loss: 0.1728 - accuracy: 0.9401
Epoch 00018: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 84ms/step - loss: 0.1728 - accuracy: 0.9401 - val_loss: 0.2168 - val_accuracy: 0.9323 - lr: 0.0010
Epoch 19/100
56/55 [==============================] - ETA: 0s - loss: 0.1810 - accuracy: 0.9345
Epoch 00019: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 85ms/step - loss: 0.1810 - accuracy: 0.9345 - val_loss: 0.2036 - val_accuracy: 0.9345 - lr: 0.0010
Epoch 20/100
56/55 [==============================] - ETA: 0s - loss: 0.1790 - accuracy: 0.9412
Epoch 00020: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 86ms/step - loss: 0.1790 - accuracy: 0.9412 - val_loss: 0.1702 - val_accuracy: 0.9436 - lr: 0.0010
Epoch 21/100
55/55 [============================>.] - ETA: 0s - loss: 0.1668 - accuracy: 0.9472
Epoch 00021: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 86ms/step - loss: 0.1660 - accuracy: 0.9475 - val_loss: 0.1579 - val_accuracy: 0.9436 - lr: 0.0010
Epoch 22/100
56/55 [==============================] - ETA: 0s - loss: 0.2020 - accuracy: 0.9316
Epoch 00022: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 83ms/step - loss: 0.2020 - accuracy: 0.9316 - val_loss: 0.1789 - val_accuracy: 0.9368 - lr: 0.0010
Epoch 23/100
56/55 [==============================] - ETA: 0s - loss: 0.1633 - accuracy: 0.9458
Epoch 00023: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 82ms/step - loss: 0.1633 - accuracy: 0.9458 - val_loss: 0.1218 - val_accuracy: 0.9616 - lr: 0.0010
Epoch 24/100
56/55 [==============================] - ETA: 0s - loss: 0.1654 - accuracy: 0.9486
Epoch 00024: val_loss did not improve from 0.09018
56/55 [==============================] - 5s 84ms/step - loss: 0.1654 - accuracy: 0.9486 - val_loss: 0.1342 - val_accuracy: 0.9594 - lr: 0.0010
###Markdown
Testing
###Code
## Confusion matrix
plot_confusion_matrix(model, x_test, y_test)
## Learning curve
plot_learning_curves(history)
## ROC AUC
multiclass_roc_auc_score(x_test, y_test, model)
# saving model
model.save("sequential.h5")
from keras.models import load_model
model = load_model('./sequential.h5')
###Output
_____no_output_____
###Markdown
Modeling - Using VGG19 VGG-19 is a transfer-learning model: a convolutional network with 16 convolutional layers (plus three fully connected layers) whose pretrained weights store knowledge that can be applied to different but related problems.
###Code
# assigning the VGG19 model (pretrained ImageNet weights, classifier head removed)
m = VGG19(weights="imagenet", include_top=False,
input_tensor=Input(shape=(64, 64, 3)))
model = addTopModel(m, num_classes)
# freeze the convolutional base so that only the newly added top layers are trained
for layer in m.layers:
    layer.trainable = False
#data augmentation
train_generator, validation_generator = process_data(x_train,y_train,
x_test,y_test, batch_size)
model.summary()
checkpoint = ModelCheckpoint(data_dir+'modelcnd' + '.h5',
monitor='val_loss',
mode="min",
save_best_only=True,
verbose=1)
earlystop = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=30,
verbose=1,
restore_best_weights=True)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_loss',
patience=30,
verbose=1,
factor=0.8,
min_lr=0.0001,
mode="auto",
min_delta=0.0001,
cooldown=5)
callbacks = [checkpoint, earlystop, learning_rate_reduction]
model.compile(loss=models_loss,
optimizer=models_opt,
metrics=['accuracy'])
history = model.fit(train_generator,
epochs=epochs,
callbacks=callbacks,
validation_data=validation_generator,
validation_steps=len(x_test) / batch_size,
steps_per_epoch=len(x_train) / batch_size)
###Output
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 64, 64, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 64, 64, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 64, 64, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 32, 32, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 32, 32, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 32, 32, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 16, 16, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 16, 16, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 16, 16, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 16, 16, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 16, 16, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 8, 8, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 8, 8, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 8, 8, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 8, 8, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 8, 8, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 4, 4, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 2, 2, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 2048) 0
_________________________________________________________________
dense_1 (Dense) (None, 512) 1049088
_________________________________________________________________
dense_2 (Dense) (None, 2) 1026
=================================================================
Total params: 21,074,498
Trainable params: 1,050,114
Non-trainable params: 20,024,384
_________________________________________________________________
Epoch 1/100
56/55 [==============================] - ETA: 0s - loss: 0.2208 - accuracy: 0.9113
Epoch 00001: val_loss improved from inf to 0.10050, saving model to /modelcnd.h5
56/55 [==============================] - 71s 1s/step - loss: 0.2208 - accuracy: 0.9113 - val_loss: 0.1005 - val_accuracy: 0.9549 - lr: 0.0010
Epoch 2/100
56/55 [==============================] - ETA: 0s - loss: 0.1331 - accuracy: 0.9463
Epoch 00002: val_loss improved from 0.10050 to 0.07233, saving model to /modelcnd.h5
56/55 [==============================] - 82s 1s/step - loss: 0.1331 - accuracy: 0.9463 - val_loss: 0.0723 - val_accuracy: 0.9729 - lr: 0.0010
Epoch 3/100
56/55 [==============================] - ETA: 0s - loss: 0.0985 - accuracy: 0.9605
Epoch 00003: val_loss improved from 0.07233 to 0.05439, saving model to /modelcnd.h5
56/55 [==============================] - 74s 1s/step - loss: 0.0985 - accuracy: 0.9605 - val_loss: 0.0544 - val_accuracy: 0.9865 - lr: 0.0010
Epoch 4/100
56/55 [==============================] - ETA: 0s - loss: 0.0814 - accuracy: 0.9712
Epoch 00004: val_loss improved from 0.05439 to 0.05357, saving model to /modelcnd.h5
56/55 [==============================] - 78s 1s/step - loss: 0.0814 - accuracy: 0.9712 - val_loss: 0.0536 - val_accuracy: 0.9865 - lr: 0.0010
Epoch 5/100
56/55 [==============================] - ETA: 0s - loss: 0.0871 - accuracy: 0.9661
Epoch 00005: val_loss did not improve from 0.05357
56/55 [==============================] - 70s 1s/step - loss: 0.0871 - accuracy: 0.9661 - val_loss: 0.0620 - val_accuracy: 0.9797 - lr: 0.0010
Epoch 6/100
56/55 [==============================] - ETA: 0s - loss: 0.0620 - accuracy: 0.9802
Epoch 00006: val_loss did not improve from 0.05357
56/55 [==============================] - 69s 1s/step - loss: 0.0620 - accuracy: 0.9802 - val_loss: 0.0908 - val_accuracy: 0.9661 - lr: 0.0010
Epoch 7/100
56/55 [==============================] - ETA: 0s - loss: 0.0789 - accuracy: 0.9689
Epoch 00007: val_loss improved from 0.05357 to 0.04287, saving model to /modelcnd.h5
56/55 [==============================] - 72s 1s/step - loss: 0.0789 - accuracy: 0.9689 - val_loss: 0.0429 - val_accuracy: 0.9797 - lr: 0.0010
Epoch 8/100
56/55 [==============================] - ETA: 0s - loss: 0.0739 - accuracy: 0.9729
Epoch 00008: val_loss did not improve from 0.04287
56/55 [==============================] - 72s 1s/step - loss: 0.0739 - accuracy: 0.9729 - val_loss: 0.0498 - val_accuracy: 0.9819 - lr: 0.0010
Epoch 9/100
56/55 [==============================] - ETA: 0s - loss: 0.0833 - accuracy: 0.9672
Epoch 00009: val_loss did not improve from 0.04287
56/55 [==============================] - 82s 1s/step - loss: 0.0833 - accuracy: 0.9672 - val_loss: 0.0467 - val_accuracy: 0.9842 - lr: 0.0010
Epoch 10/100
56/55 [==============================] - ETA: 0s - loss: 0.0737 - accuracy: 0.9718
Epoch 00010: val_loss did not improve from 0.04287
56/55 [==============================] - 76s 1s/step - loss: 0.0737 - accuracy: 0.9718 - val_loss: 0.0523 - val_accuracy: 0.9842 - lr: 0.0010
Epoch 11/100
56/55 [==============================] - ETA: 0s - loss: 0.0754 - accuracy: 0.9729
Epoch 00011: val_loss improved from 0.04287 to 0.03879, saving model to /modelcnd.h5
56/55 [==============================] - 78s 1s/step - loss: 0.0754 - accuracy: 0.9729 - val_loss: 0.0388 - val_accuracy: 0.9774 - lr: 0.0010
Epoch 12/100
56/55 [==============================] - ETA: 0s - loss: 0.0651 - accuracy: 0.9763
Epoch 00012: val_loss did not improve from 0.03879
56/55 [==============================] - 74s 1s/step - loss: 0.0651 - accuracy: 0.9763 - val_loss: 0.0953 - val_accuracy: 0.9684 - lr: 0.0010
Epoch 13/100
56/55 [==============================] - ETA: 0s - loss: 0.0840 - accuracy: 0.9689
Epoch 00013: val_loss did not improve from 0.03879
56/55 [==============================] - 72s 1s/step - loss: 0.0840 - accuracy: 0.9689 - val_loss: 0.0791 - val_accuracy: 0.9684 - lr: 0.0010
Epoch 14/100
56/55 [==============================] - ETA: 0s - loss: 0.0676 - accuracy: 0.9757
Epoch 00014: val_loss did not improve from 0.03879
56/55 [==============================] - 70s 1s/step - loss: 0.0676 - accuracy: 0.9757 - val_loss: 0.0503 - val_accuracy: 0.9842 - lr: 0.0010
Epoch 15/100
56/55 [==============================] - ETA: 0s - loss: 0.0532 - accuracy: 0.9836
Epoch 00015: val_loss did not improve from 0.03879
56/55 [==============================] - 69s 1s/step - loss: 0.0532 - accuracy: 0.9836 - val_loss: 0.0619 - val_accuracy: 0.9797 - lr: 0.0010
###Markdown
Testing
###Code
## Confusion matrix
plot_confusion_matrix(model, x_test, y_test)
## Learning curve
plot_learning_curves(history)
## ROC AUC
multiclass_roc_auc_score(x_test, y_test, model)
# saving model
model.save("modelcnd.h5")
###Output
_____no_output_____ |
examples/tutorial/opacity_exomol.ipynb | ###Markdown
Computing CO cross section using ExoMol This tutorial demonstrates how to compute the opacity of CO using ExoMol step by step.
###Code
from exojax.spec import xsection
from exojax.spec import SijT, doppler_sigma, gamma_natural
from exojax.spec.exomol import gamma_exomol
from exojax.spec import moldb
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('bmh')
###Output
/home/kawahara/anaconda3/lib/python3.7/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
First of all, set a wavenumber bin in units of wavenumber (cm-1). Here we set the wavenumber range as $1000 \le \nu \le 10000$ (1/cm) with a resolution of 0.01 (1/cm). We then create a moldb instance with the path to the ExoMol files.
###Code
# Setting wavenumber bins and loading the ExoMol database
nus=np.linspace(1000.0,10000.0,900000,dtype=np.float64) #cm-1
emf='/home/kawahara/exojax/data/CO/12C-16O/Li2015'
mdbCO=moldb.MdbExomol(emf,nus)
###Output
Background atmosphere: H2
Reading transition file
Broadening code level= a0
default broadening parameters are used for 71 J lower states in 152 states
###Markdown
Define the molecular weight of CO ($\sim 12+16=28$), temperature (K), and pressure (bar). Also, we assume here a 100% CO atmosphere, i.e. the partial pressure equals the total pressure.
###Code
Mmol=28.010446441149536 # molecular weight
Tfix=1000.0 # we assume T=1000K
Pfix=1.e-3 # we compute P=1.e-3 bar
###Output
_____no_output_____
###Markdown
The partition function ratio $q(T)$ is defined by $q(T) = Q(T)/Q(T_{ref})$; $T_{ref}$ = 296 K. Here, we obtain $q(T)$ by interpolating the tabulated partition function.
###Code
qt=mdbCO.qr_interp(Tfix)
###Output
_____no_output_____
###Markdown
Let us compute the line strength S(T) at temperature of Tfix.$S (T;s_0,\nu_0,E_l,q(T)) = S_0 \frac{Q(T_{ref})}{Q(T)} \frac{e^{- h c E_l /k_B T}}{e^{- h c E_l /k_B T_{ref}}} \frac{1- e^{- h c \nu /k_B T}}{1-e^{- h c \nu /k_B T_{ref}}}= q_r(T)^{-1} e^{ s_0 - c_2 E_l (T^{-1} - T_{ref}^{-1})} \frac{1- e^{- c_2 \nu_0/ T}}{1-e^{- c_2 \nu_0/T_{ref}}}$$s_0=\log_{e} S_0$ : logsij0$\nu_0$: nu_lines$E_l$ : elowerWhy the input is $s_0 = \log_{e} S_0$ instead of $S_0$ in SijT? This is because the direct value of $S_0$ is quite small and we need to use float32 for jax.
###Code
Sij=SijT(Tfix,mdbCO.logsij0,mdbCO.nu_lines,mdbCO.elower,qt)
###Output
_____no_output_____
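###Markdown
As a cross-check, the line-strength formula above can be written directly in NumPy. This is only an illustrative sketch (the exojax `SijT` call above is the implementation actually used here); $c_2 = hc/k_B \approx 1.4388$ cm K is the second radiation constant and $T_{ref}$ = 296 K as stated above.
###Code
c2 = 1.4387769 # second radiation constant hc/k_B in cm K
Tref = 296.0 # reference temperature in K
def Sij_sketch(T, logsij0, nu_lines, elower, qt):
    # q_r(T)^{-1} e^{s_0 - c_2 E_l (1/T - 1/T_ref)} times the stimulated-emission ratio
    boltz = np.exp(logsij0 - c2*elower*(1.0/T - 1.0/Tref))
    stim = (1.0 - np.exp(-c2*nu_lines/T))/(1.0 - np.exp(-c2*nu_lines/Tref))
    return boltz*stim/qt
Sij_check = Sij_sketch(Tfix, mdbCO.logsij0, mdbCO.nu_lines, mdbCO.elower, qt)
###Output
_____no_output_____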
###Markdown
Then, compute the Lorentz gamma factor (pressure + natural broadening) $\gamma_L = \gamma^p_L + \gamma^n_L$, where the pressure broadening is $\gamma^p_L = \alpha_{ref} ( T/T_{ref} )^{-n_{texp}} ( P/P_{ref})$, and the natural broadening is $\gamma^n_L = \frac{A}{4 \pi c}$
###Code
gammaL = gamma_exomol(Pfix,Tfix,mdbCO.n_Texp,mdbCO.alpha_ref)\
+ gamma_natural(mdbCO.A)
gamma_exomol(Pfix,Tfix,mdbCO.n_Texp,mdbCO.alpha_ref)
fig=plt.figure()
fig.add_subplot(211)
plt.plot(mdbCO.jlower,mdbCO.n_Texp,".")
fig.add_subplot(212)
plt.plot(mdbCO.jlower,mdbCO.alpha_ref,".")
###Output
_____no_output_____
###Markdown
Thermal broadening$\sigma_D^{t} = \sqrt{\frac{k_B T}{M m_u}} \frac{\nu_0}{c}$
###Code
# thermal doppler sigma
sigmaD=doppler_sigma(mdbCO.nu_lines,Tfix,Mmol)
###Output
_____no_output_____
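###Markdown
A quick dimensional cross-check of the Doppler width using scipy's physical constants (illustrative; assumes scipy is available): $\sqrt{k_B T/(M m_u)}/c$ is dimensionless, so multiplying by $\nu_0$ in cm$^{-1}$ gives $\sigma_D$ in cm$^{-1}$.
###Code
from scipy.constants import k as k_B, atomic_mass, c as c_light
sigmaD_check = mdbCO.nu_lines*np.sqrt(k_B*Tfix/(Mmol*atomic_mass))/c_light
###Output
_____no_output_____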
###Markdown
Then, the line center...
###Code
#line center
nu0=mdbCO.nu_lines
###Output
_____no_output_____
###Markdown
Although it depends on your GPU, you might need to divide the computation into multiple loops because of the limited GPU memory. Here we assume 30 MB of GPU memory (more precisely, the memory size for numatrix).
###Code
xsv=xsection(nus,nu0,sigmaD,gammaL,Sij,memory_size=30) #use 30MB GPU MEMORY for numax
###Output
100%|██████████| 8257/8257 [01:37<00:00, 84.45it/s]
###Markdown
Plot it!
###Code
fig=plt.figure(figsize=(10,3))
ax=fig.add_subplot(111)
plt.plot(nus,xsv,lw=0.1,label="exojax")
plt.yscale("log")
plt.xlabel("wavenumber ($cm^{-1}$)")
plt.ylabel("cross section ($cm^{2}$)")
plt.legend(loc="upper left")
plt.savefig("co_exomol.pdf", bbox_inches="tight", pad_inches=0.0)
plt.show()
fig=plt.figure(figsize=(10,3))
ax=fig.add_subplot(111)
plt.plot(1.e8/nus,xsv,lw=1,label="exojax")
plt.yscale("log")
plt.xlabel("wavelength ($\AA$)")
plt.ylabel("cross section ($cm^{2}$)")
plt.xlim(22985.,23025)
plt.legend(loc="upper left")
plt.savefig("co_exomol.pdf", bbox_inches="tight", pad_inches=0.0)
plt.show()
###Output
_____no_output_____
###Markdown
Important Note Use float64 for the wavenumber bin and line center. Below, we see the difference in opacity between the float64 and float32 cases.
###Code
xsv_32=xsection(np.float32(nus),np.float32(nu0),sigmaD,gammaL,Sij,memory_size=30)
fig=plt.figure(figsize=(10,6))
ax=fig.add_subplot(211)
plt.plot(1.e8/nus,xsv,".",lw=1,label="64",markersize=1)
plt.plot(1.e8/nus,xsv_32,".",lw=1,label="32",markersize=1)
plt.xlim(22985.,23025)
plt.yscale("log")
plt.ylabel("xsv $cm^{2}$")
ax=fig.add_subplot(212)
plt.plot(1.e8/nus,(xsv_32-xsv)/xsv,lw=1,label="difference")
plt.xlabel("wavelength ($\AA$)")
plt.ylabel("Difference")
plt.xlim(22985.,23025)
plt.legend(loc="upper left")
plt.show()
###Output
_____no_output_____ |
autonormalize/demos/AutoNormalize + FeatureTools Demo.ipynb | ###Markdown
Using AutoNormalize with FeatureTools and Compose-ML In this demo we will use AutoNormalize to create our `entityset` from a mock dataset with a single table. We will use composeml and featuretools to generate labels and features, and then create a machine learning model for predicting one hour in advance whether customers will spend over $1200 within the next hour of transactions.
###Code
%matplotlib inline
from featuretools.autonormalize import autonormalize as an
import composeml as cp
import featuretools as ft
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
Load Data
###Code
transaction_df = ft.demo.load_mock_customer(n_customers=80, n_products=50, n_sessions=200,
n_transactions=10000, return_single_table=True)
transaction_df.head(3)
###Output
_____no_output_____
###Markdown
Generate Labels We create our labeling function, and use compose-ml to create our label maker. Using this, we extract our labels.
###Code
def total_spent(df_slice):
label = df_slice["amount"].sum()
return label
label_maker = cp.LabelMaker(target_entity='customer_id', time_index='transaction_time',
labeling_function=total_spent, window_size='1h')
labels = label_maker.search(transaction_df, minimum_data='2h', num_examples_per_instance=50, gap='2min')
labels.head(4)
###Output
Elapsed: 00:06 | Remaining: 00:00 | Progress: 100%|██████████████████| customer_id: 75/75
###Markdown
We then transform the labels to use a threshold of $1200, and shift the label times an hour earlier for predicting in advance.
###Code
labels = labels.threshold(1200)
labels = labels.apply_lead('1h')
labels.head(4)
labels.describe()
###Output
Label Distribution
------------------
True 1579
False 584
Total: 2163
Settings
--------
num_examples_per_instance 50
minimum_data 2h
window_size 1h
gap 2min
Transforms
----------
1. threshold
- value: 1200
2. apply_lead
- value: 1h
###Markdown
Create an `EntitySet` with AutoNormalize To create an `EntitySet` with AutoNormalize we just call `auto_entityset()` on our dataframe. AutoNormalize then automatically detects the dependencies within the data, and normalizes the dataframe accordingly. This gives us an automatically normalized entityset!
###Code
es = an.auto_entityset(transaction_df, accuracy=1, name="transactions", time_index='transaction_time')
es.add_last_time_indexes()
print(es)
###Output
100%|██████████| 10/10 [00:01<00:00, 7.11it/s]
###Markdown
It's really that simple. The plot shows you how the library normalized `transaction_df`.
###Code
es.plot()
###Output
_____no_output_____
###Markdown
Create Feature Matrix Now we generate features using `dfs()`.
###Code
feature_matrix, features_defs = ft.dfs(
entityset=es,
target_entity='customer_id',
cutoff_time=labels,
cutoff_time_in_index=True,
verbose=True,
)
features_defs[:20]
###Output
Built 73 features
Elapsed: 08:15 | Remaining: 00:00 | Progress: 100%|██████████| Calculated: 11/11 chunks
###Markdown
Machine Learning Now we preprocess our features, and split the features and corresponding labels into training and testing sets.
###Code
y = feature_matrix.pop(labels.name)
x = feature_matrix.fillna(0)
x, features_enc = ft.encode_features(x, features_defs)
x_train, x_test, y_train, y_test = train_test_split(
x,
y,
train_size=.8,
test_size=.2,
random_state=0,
)
###Output
_____no_output_____
###Markdown
Now, we train a random forest classifier on the training set, and then test the model's performance by evaluating predictions on the testing set.
###Code
clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(x_train, y_train)
y_hat = clf.predict(x_test)
print(classification_report(y_test, y_hat))
###Output
precision recall f1-score support
False 0.67 0.06 0.11 129
True 0.71 0.99 0.83 304
accuracy 0.71 433
macro avg 0.69 0.52 0.47 433
weighted avg 0.70 0.71 0.61 433
###Markdown
This plot uses the importance scores from the model to illustrate which features are considered important for predictions.
###Code
feature_importances = zip(x_train.columns, clf.feature_importances_)
feature_importances = pd.Series(dict(feature_importances))
feature_importances = feature_importances.rename_axis('Features')
feature_importances = feature_importances.sort_values()
top_features = feature_importances.tail(40)
plot = top_features.plot(kind='barh', figsize=(5, 12), color='#054571')
plot.set_title('Feature Importances')
plot.set_xlabel('Scores');
###Output
_____no_output_____ |
test/Make_test_dataset.ipynb | ###Markdown
Make some genes
###Code
gene_1_list = make_gene_exon_intron_list([100,20,100,20,100])
gene_2_list = make_gene_exon_intron_list([50,20,50])
gene_1_transcript_1_sequence = gene_1_list[0] + gene_1_list[2]
gene_1_transcript_2_sequence = gene_1_list[2] + gene_1_list[4]
gene_2_transcript_1_sequence = gene_2_list[0] + gene_2_list[2]
###Output
_____no_output_____
###Markdown
We make a 520 bp genome sequence (20 + 340 + 20 + 120 + 20), with 20 bases of random nucleotides before, between, and after the two genes. The sequence is also written as a fasta file.
###Code
genome_sequence = make_sequence(20) + "".join(gene_1_list) + make_sequence(20) + "".join(gene_2_list) + make_sequence(20)
SeqIO.write(SeqRecord.SeqRecord(seq = Seq.Seq(genome_sequence), id = 'test', description = ''), format='fasta', handle = './STAR_GENOME/test.fa')
###Output
_____no_output_____
###Markdown
Make a sample gtf and write it to file
###Code
import csv
list_for_gft_df = [['test', 'FOO', 'gene', 21, 360, '.', '+', '.', 'gene_id "ENSG_test1";']]
list_for_gft_df.append(['test', 'FOO', 'transcript', 21, 240, '.', '+', '.', 'gene_id "ENSG_test1"; transcript_id "ENST_test1_1";'])
list_for_gft_df.append(['test', 'FOO', 'exon', 21, 120, '.', '+', '.', 'gene_id "ENSG_test1"; transcript_id "ENST_test1_1";'])
list_for_gft_df.append(['test', 'FOO', 'exon', 141, 240, '.', '+', '.', 'gene_id "ENSG_test1"; transcript_id "ENST_test1_1";'])
list_for_gft_df.append(['test', 'FOO', 'transcript', 141, 360, '.', '+', '.', 'gene_id "ENSG_test1"; transcript_id "ENST_test1_2";'])
list_for_gft_df.append(['test', 'FOO', 'exon', 141, 240, '.', '+', '.', 'gene_id "ENSG_test1"; transcript_id "ENST_test1_2";'])
list_for_gft_df.append(['test', 'FOO', 'exon', 261, 360, '.', '+', '.', 'gene_id "ENSG_test1"; transcript_id "ENST_test1_2";'])
list_for_gft_df.append(['test', 'FOO', 'gene', 381, 500, '.', '+', '.', 'gene_id "ENSG_test2";'])
list_for_gft_df.append(['test', 'FOO', 'transcript', 381, 500, '.', '+', '.', 'gene_id "ENSG_test2"; transcript_id "ENST_test2_1";'])
list_for_gft_df.append(['test', 'FOO', 'exon', 381, 430, '.', '+', '.', 'gene_id "ENSG_test2"; transcript_id "ENST_test2_1";'])
list_for_gft_df.append(['test', 'FOO', 'exon', 451, 500, '.', '+', '.', 'gene_id "ENSG_test2"; transcript_id "ENST_test2_1";'])
pd.DataFrame(list_for_gft_df).to_csv("STAR_GENOME/test.gtf", sep = "\t", header = False, index = False, quoting = csv.QUOTE_NONE)#quotechar = '')#, doublequote = False)
###Output
_____no_output_____
###Markdown
Make a STAR genome index
###Code
! STAR --runThreadN 1 --runMode genomeGenerate --genomeSAindexNbases 10 --genomeDir ./STAR_GENOME --genomeFastaFiles ./STAR_GENOME/test.fa --sjdbGTFfile ./STAR_GENOME/test.gtf --sjdbOverhang 35
def mutate_dna(dna_str, nm = 1):
    # Introduce `nm` random point mutations: choose `nm` positions and
    # replace each with a different, randomly chosen base from `bases`.
    inds_list = [random.randrange(0,len(dna_str)) for _i in 'a'*nm]
    new_seq_list = list(dna_str)
    for _ind in inds_list:
        current_base = dna_str[_ind]
        new_base = bases[random.randrange(0,4)]
        while new_base == current_base:  # redraw until the base actually changes
            new_base = bases[random.randrange(0,4)]
        new_seq_list[_ind] = new_base
    return ''.join(new_seq_list)
###Output
_____no_output_____
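###Markdown
A quick sanity check of `mutate_dna` (illustrative; relies on `random` and `bases` defined earlier in this notebook): with `nm=1` exactly one position is mutated, and the new base is always different from the original.
###Code
orig_seq = 'ACGTACGTAC'
mut_seq = mutate_dna(orig_seq, nm=1)
# exactly one position should differ
assert sum(a != b for a, b in zip(orig_seq, mut_seq)) == 1
###Output
_____no_output_____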
###Markdown
Make some reads and write them to fastq files
###Code
def make_paired_reads_and_write_to_fastq_file(cell_barcode, umi, transcript_seq, read1_fh, read2_fh, read_length = 70, bc_nm = 1, umi_nm = 1):
read_id = 'TESTREAD_' + ''.join([random.choice(string.ascii_uppercase + string.digits) for _ind in range(6)])
read_1_str = mutate_dna(cell_barcode, bc_nm) + mutate_dna(umi, umi_nm) + 'TTTTTTTTTT'
r1_seqrec = SeqRecord.SeqRecord(id = read_id, description = '', seq = Seq.Seq(read_1_str))
r1_seqrec.letter_annotations['phred_quality'] = [30 for i in range(len(read_1_str))]
SeqIO.write(r1_seqrec, format = 'fastq', handle = read1_fh)
start_ind = random.randint(0, len(transcript_seq) - read_length)
read_2_str = transcript_seq[start_ind:(start_ind + read_length)]
r2_seqrec = SeqRecord.SeqRecord(id = read_id, description = '', seq = Seq.Seq(read_2_str))
r2_seqrec.letter_annotations['phred_quality'] = [30 for i in range(read_length)]
SeqIO.write(r2_seqrec, format = 'fastq', handle = read2_fh)
cell_1_barcode = 'AGATCG'
cell_2_barcode = 'CGTAGA'
umi_1 = 'ATCCG'
umi_2 = 'TAGGT'
umi_3 = 'CCTAA'
###Output
_____no_output_____
###Markdown
The following counts should result:

| | ENSG_test1 | ENSG_test2 | __no_feature | __ambiguous | __too_low_aQual | __not_aligned | __alignment_not_unique |
| --- | --- | --- | --- | --- | --- | --- | --- |
| test_ROW10_R2_umi_labelledAligned | 3 | 2 | 0 | 0 | 0 | 0 | 0 |
| test_ROW20_R2_umi_labelledAligned | 2 | 3 | 0 | 0 | 0 | 0 | 0 |
###Code
r1_fh = open("./FASTQ/test_R1.fastq", 'w')
r2_fh = open("./FASTQ/test_R2.fastq", 'w')
#cell 1
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_1, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_1, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_1, gene_1_transcript_2_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_1, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_1, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_2, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_2, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_2, gene_1_transcript_2_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_3, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_3, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_3, gene_1_transcript_2_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_3, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_1_barcode, umi_3, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
#cell 2
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_3, gene_1_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_1, gene_1_transcript_2_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_1, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_2, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
make_paired_reads_and_write_to_fastq_file(cell_2_barcode, umi_3, gene_2_transcript_1_sequence, r1_fh, r2_fh, umi_nm = 0)
r1_fh.close()
r2_fh.close()
import pysam
bam_obj = pysam.AlignmentFile("/home/rob/Dropbox/python_package_dev/fluidigm_800_chip_processor/test/test_out/test_Seq/test_ROW10_R2_umi_labelledAligned.sortedByCoord.out_tagged.bam", 'r')
for _read in bam_obj:
    # De-duplicate the comma-separated gene names stored in the GN tag
    _read.set_tag("GN", ",".join(list(set(_read.get_tag('GN').split(',')))))
    print(_read)
'foo,bar'.split(',')
###Output
_____no_output_____ |
doc/source/user_guide/Numba.ipynb | ###Markdown
(numba_for_arviz)= Numba - an overview [Numba](https://numba.pydata.org/numba-doc/latest/index.html) is a just-in-time compiler for Python that works best on code that uses NumPy arrays and functions, and loops.ArviZ includes {ref}`Numba as an optional dependency ` and a number of functions have been included in `utils.py` for systems in which Numba is pre-installed. Additional functionality, {class}`arviz.Numba`, of disabling/re-enabling numba for systems that have Numba installed has also been included. A simple example to display the effectiveness of Numba
###Code
import arviz as az
import numpy as np
import timeit
from arviz.utils import conditional_jit, Numba
from arviz.stats.diagnostics import ks_summary
data = np.random.randn(1000000)
def variance(data, ddof=0): # Method to calculate variance without using numba
a_a, b_b = 0, 0
for i in data:
a_a = a_a + i
b_b = b_b + i * i
var = b_b / (len(data)) - ((a_a / (len(data))) ** 2)
var = var * (len(data) / (len(data) - ddof))
return var
%timeit variance(data, ddof=1)
@conditional_jit
def variance_jit(data, ddof=0): # Calculating variance with numba
a_a, b_b = 0, 0
for i in data:
a_a = a_a + i
b_b = b_b + i * i
var = b_b / (len(data)) - ((a_a / (len(data))) ** 2)
var = var * (len(data) / (len(data) - ddof))
return var
%timeit variance_jit(data, ddof=1)
###Output
1.54 ms ± 383 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
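###Markdown
The loop above computes variance with the one-pass identity $\mathrm{Var}(x) = \frac{n}{n-\mathrm{ddof}}\left(\frac{1}{n}\sum_i x_i^2 - \left(\frac{1}{n}\sum_i x_i\right)^2\right)$, i.e. a single pass accumulating $\sum_i x_i$ and $\sum_i x_i^2$, which is exactly the kind of simple numeric loop Numba compiles well.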
###Markdown
That is almost 300 times faster!! Let's compare this to NumPy
###Code
%timeit np.var(data, ddof=1)
###Output
8.68 ms ± 435 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
In certain scenarios, Numba can even outperform NumPy! Numba within ArviZ Let's see Numba's effect on a few of ArviZ functions
###Code
summary_data = np.random.randn(1000, 100, 10)
school = az.load_arviz_data("centered_eight").posterior["mu"].values
###Output
_____no_output_____
###Markdown
The methods of the {class}`~arviz.Numba` class can be used to enable or disable numba. The attribute `numba_flag` indicates whether numba is enabled within ArviZ or not.
###Code
Numba.disable_numba()
Numba.numba_flag
%timeit ks_summary(summary_data)
%timeit ks_summary(school)
Numba.enable_numba()
Numba.numba_flag
%timeit ks_summary(summary_data)
%timeit ks_summary(school)
###Output
3.97 ms ± 574 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
|
week5_eda/analyzing_model_perf.ipynb | ###Markdown
Approximately 80% of the data belongs to class 1, so the default accuracy is about 80%. The aim here is to obtain an accuracy of 99 - 99.9%.

The examples in the original dataset were in time order, and this time order could presumably be relevant in classification. However, this was not deemed relevant for StatLog purposes, so the order of the examples in the original dataset was randomised, and a portion of the original dataset removed for validation purposes.

Attribute Information: the shuttle dataset contains 9 attributes, all of which are numerical; the first one is time. The last column is the class, which has been coded as follows:
* 1 Rad Flow
* 2 Fpv Close
* 3 Fpv Open
* 4 High
* 5 Bypass
* 6 Bpv Close
* 7 Bpv Open

2. Load and prepare the dataset. Load the training data into a DataFrame named df_train_data. Create the binary classification problem; rename some class labels. Create a DataFrame of nine features named X, drop column 9. Create a DataFrame of labels named y, select only column 9. Split the data into a training set and a test set.
3. Create the model. Instantiate a logistic regression classifier with an lbfgs solver. Fit the classifier to the data.
4. Calculate accuracy. Calculate and print the accuracy of the model on the test data.
5. Dummy classifier. Use the dummy classifier to calculate the accuracy of purely random chance. Compare this result to the result of the logistic regression classifier above. What does this result tell you?
6. Confusion matrix. Print the confusion matrix.
7. Plot a nicer confusion matrix (optional). Use the plot_confusion_matrix() function from above to plot a nicer-looking confusion matrix.
8. Calculate metrics. Print the F₁, Fᵦ, precision, recall, and accuracy scores.
9. Print a classification report.
10. Plot the ROC curve and AUC. Calculate AUC and plot the curve.
11. Plot the precision-recall curve for the model above. Then find the best value for C in the logistic regression classifier for avoiding overfitting: plot the training and testing accuracy over a range of C values from 0.05 to 1.5.
12. Cross-validation. Perform five-fold cross-validation for a logistic regression classifier. Print the five accuracy scores and the mean validation score.
13. Is this really linear? Your linear classifier is not giving you better accuracy than the dummy classifier. Suppose that the data was not linearly separable. Instantiate and train a KNN model with k = 7. How does the accuracy of the KNN model compare to the logistic regression from above? What does that tell you about the data?
14. Random forest. Next, instantiate and fit a random forest classifier and calculate the accuracy of that model.

Now, answer some additional questions about analyzing model performance.
###Code
colnames=['Time','A','B','C','D','E','F','G','H','target']
df_train_data = pd.read_csv('shuttle.tst.csv', names=colnames, header=None)
df_train_data.head()
# mapping = [1: 'Rad Flow',2: 'Fpv Close', 3: 'Fpv Open',4: 'High',5: 'Bypass',6: 'Bpv Close',7: 'Bpv Open']
# creating a binary label using values =1 at target
df_train_data['target_flow'] = df_train_data['target'] < 2
sns.countplot(x=df_train_data['target_flow'])
plt.show()
X = df_train_data.drop(columns=['target_flow','target'])
y = df_train_data['target_flow']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# creating lr model
lr = LogisticRegression(solver='lbfgs', random_state=4)
lr.fit(X_train, y_train)  # fit on the training split only, to avoid test-set leakage
test_score = lr.score(X_test, y_test)
train_score = lr.score(X_train, y_train)
print('accuracy score: %s' % lr.score(X_test, y_test))
print('# of iterations %s' % lr.n_iter_[0])
print('Score on training data: ', train_score)
print('Score on test data: ', test_score)
# model accuracy is high with no tuning at 94%
# comparing high accuracy of our model to a dummy model
dummy = DummyClassifier(strategy = 'most_frequent')
dummy.fit(X_train, y_train)
dummy.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The dummy classifier guesses correctly about 80% of the time (it always predicts the majority class), so the logistic regression model's 94% is not that impressive.
###Code
predictions = lr.predict(X_test)
confusion = confusion_matrix(y_test, predictions, labels=[1, 0])
print(confusion)
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
"""
Given a scikit-learn confusion matrix (CM), make a nice plot.
Arguments
---------
cm: Confusion matrix from sklearn.metrics.confusion_matrix
target_names: Given classification classes, such as [0, 1, 2]
The class names, for example, ['high', 'medium', 'low']
title: The text to display at the top of the matrix
cmap: The gradient of the values displayed from matplotlib.pyplot.cm
See http://matplotlib.org/examples/color/colormaps_reference.html
`plt.get_cmap('jet')` or `plt.cm.Blues`
normalize: If `False`, plot the raw numbers
If `True`, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # Confusion matrix created by
# `sklearn.metrics.confusion_matrix`
normalize = True, # Show proportions
target_names = y_labels_vals, # List of names of the classes
title = best_estimator_name) # Title of graph
Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
plot_confusion_matrix(cm=confusion, target_names = ['Target_flow', 'Not target_flow'], title = 'Confusion Matrix',normalize=False)
accuracy = accuracy_score(y_test, predictions)
precision = precision_score(y_test, predictions)
recall = recall_score(y_test, predictions)
f1 = f1_score(y_test, predictions)
fbeta_precision = fbeta_score(y_test, predictions, 0.5)
fbeta_recall = fbeta_score(y_test, predictions, 2)
print('Accuracy score: {:.2f}'.format(accuracy))
print('Precision score: {:.2f}'.format(precision))
print('Recall score: {:.2f}'.format(recall))
print('F1 score: {:.2f}'.format(f1))
print('Fbeta score favoring precision: {:.2f}'.format(fbeta_precision))
print('FBeta score favoring recall: {:.2f}'.format(fbeta_recall))
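# For reference, F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall):
# beta < 1 weights precision more heavily, beta > 1 weights recall more heavily.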
report = classification_report(y_test, predictions, target_names=['Target_flow', 'Not target_flow'])
print(report)
probs = lr.predict_proba(X_test)[:, 1]
print(probs[1:30])
# plotting the decision threshold in the model occurring at 0.5
pos = [i for i, j in zip(probs, y_test) if j == 1]
neg = [i for i, j in zip(probs, y_test) if j == 0]
with plt.xkcd():
fig = plt.figure(figsize=(8, 4))
sns.distplot(pos, hist = False, kde = True, color='g',
kde_kws = {'shade': True, 'linewidth': 3})
sns.distplot(neg, hist = False, kde = True, color='r',
kde_kws = {'shade': True, 'linewidth': 3})
plt.plot([0.5, 0.5], [0, 25], '-b')
plt.annotate(
'The probability threshold\npositive to the right\nnegative to the left',
xy=(0.51, 15), arrowprops=dict(arrowstyle='->'), xytext=(0.6, 20))
plt.show()
fpr, tpr, thresholds = roc_curve(y_test, probs)
print(fpr[1:30])
print(tpr[1:30])
print(thresholds[1:30])
fig = plt.figure(figsize = (6, 6))
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve for Logistic Regression Model')
plt.show()
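# Added for task 10: the area under the ROC curve plotted above.
# (The exact value depends on the train/test split.)
from sklearn.metrics import roc_auc_score
auc_value = roc_auc_score(y_test, probs)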
pres, rec, thresholds = precision_recall_curve(y_test, probs)  # use predicted probabilities, not hard labels, so the curve covers many thresholds
fig = plt.figure(figsize = (6, 6))
plt.plot(rec, pres)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.show()
# looking at the effect of L2 regularization with more iterations (note: sklearn's LogisticRegression already applies an L2 penalty with C=1.0 by default, so the earlier fit was not truly unregularized)
lr_regularized = LogisticRegression(solver='lbfgs', penalty='l2', max_iter=200, random_state=2)
lr_regularized.fit(X_train, y_train)
test_score = lr_regularized.score(X_test, y_test)
train_score = lr_regularized.score(X_train, y_train)
print('Score on training data: ', train_score)
print('Score on test data: ', test_score)
# With L2 regularization and 200 iterations, accuracy has increased and overfitting is reduced. The range of coefficients shrinks
# from about -1.5 to 2.5 in the earlier fit to between -1 and 1.5 here, i.e. from roughly a 4-point spread to a 2.5-point spread.
fig = plt.figure(figsize=(8, 8))
grid = plt.GridSpec(2, 2, hspace=0.5, wspace=0.5)
x = np.arange(0, len(lr.coef_[0]),1)
y = lr.coef_[0]
ax1 = fig.add_subplot(grid[0, 0])
ax1.plot(x, y, '-g')
ax1.set(xlabel='Features', ylabel='Coefficients')
ax1.set_title('No Regularization')
y_reg = lr_regularized.coef_[0]
ax2 = fig.add_subplot(grid[0, 1])
ax2.plot(x, y_reg, '-r')
ax2.set(xlabel='Features', ylabel='Coefficients')
ax2.set_title('L2 Regularization')
ax3 = fig.add_subplot(grid[1, 0:])
ax3.plot(x, y, '-g')
ax3.plot(x, y_reg, '-r')
ax3.set(xlabel='Features', ylabel='Coefficients')
ax3.set_title('Both on same chart for comparison')
plt.show()
# looking at effect of various values of C on the model
c_vals = np.arange(0.05, 1.5, 0.05)
test_accuracy = []
train_accuracy = []
for c in c_vals:
lr = LogisticRegression(solver='lbfgs', penalty='l2', C=c, max_iter=200, random_state=2)
lr.fit(X_train, y_train)
test_accuracy.append(lr.score(X_test, y_test))
train_accuracy.append(lr.score(X_train, y_train))
fig = plt.figure(figsize=(8, 4))
ax1 = fig.add_subplot(1, 1, 1)
ax1.plot(c_vals, test_accuracy, '-g', label='Test Accuracy')
ax1.plot(c_vals, train_accuracy, '-b', label='Train Accuracy')
ax1.set(xlabel='C', ylabel='Accuracy')
ax1.set_title('Effect of C on Accuracy')
ax1.legend()
plt.show()
# The best test accuracy occurs somewhere around C = 0.7.
df_train_data.shape
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(solver='lbfgs')
cv_scores = cross_val_score(clf, X_train, y_train, cv = 5)
print('Accuracy scores for the five folds: ', cv_scores)
print('Mean cross-validation score: {:.3f}'.format(np.mean(cv_scores)))
###Output
Accuracy scores for the five folds: [0.93232759 0.94008621 0.93965517 0.92715517 0.93232759]
Mean cross-validation score: 0.934
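###Markdown
Minimal sketches for tasks 13 and 14 above (illustrative: k = 7 comes from the task description, while the other hyperparameters are assumptions, and the scores depend on the train/test split). If the KNN model scores clearly higher than the logistic regression, that suggests the data is not linearly separable.
###Code
# Task 13: KNN with k = 7 (sketch)
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train, y_train)
knn_accuracy = knn.score(X_test, y_test)
# Task 14: random forest (sketch; n_estimators=100 is an assumed value)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
rf_accuracy = rf.score(X_test, y_test)
###Output
_____no_output_____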
|
rossmann_timeSeries.ipynb | ###Markdown
###Code
# Last amended: 19th March, 2021
# Myfolder: github/deeplearning-sequences
# Objectives:
# i) Feature engineering in Time Series data
# ii) Using fastai on tabular data
#
# https://colab.research.google.com/github/duchaba2/fastai_san_ramon_biztech/blob/master/smbt_rossman_data_clean.ipynb#scrollTo=UImWYEGiaFUS
# fastai.core: https://docs.fast.ai/tabular.core.html
# https://www.kaggle.com/hortonhearsafoo/fast-ai-lesson-3
# https://github.com/duchaba2/fastai_san_ramon_biztech
# Using fastai on tabular data
# https://github.com/fastai/fastbook/blob/master/09_tabular.ipynb
# Rossmann Data Engineering
# Much of it is not implemented
# See: https://www.kaggle.com/omgrodas/rossmann-data-engineering
# Last amended: 17th March, 2021
# My folder:
# Objectives:
# i) Predicting sales in Rossmann Store Sales
# ii) Feature generation in TimeSeries data
###Output
_____no_output_____
###Markdown
The problem

Rossmann operates over 3,000 drug stores in 7 European countries. Currently, Rossmann store managers are tasked with predicting their daily sales for up to six weeks in advance. Store sales are influenced by many factors, including promotions, competition, school and state holidays, seasonality, and locality. With thousands of individual managers predicting sales based on their unique circumstances, the accuracy of results can be quite varied.

Field descriptions

Most of the fields are self-explanatory. The following are descriptions for those that aren't.
> **Id** - an Id that represents a (Store, Date) duple within the test set
> **Store** - a unique Id for each store
> **Sales** - the turnover for any given day (this is what you are predicting)
> **Customers** - the number of customers on a given day
> **Open** - an indicator for whether the store was open: 0 = closed, 1 = open
> **StateHoliday** - indicates a state holiday. Normally all stores, with few exceptions, are closed on state holidays. Note that all schools are closed on public holidays and weekends. a = public holiday, b = Easter holiday, c = Christmas, 0 = None
> **SchoolHoliday** - indicates if the (Store, Date) was affected by the closure of public schools
> **StoreType** - differentiates between 4 different store models: a, b, c, d
> **Assortment** - describes an assortment level: a = basic, b = extra, c = extended
> **CompetitionDistance** - distance in meters to the nearest competitor store
> **CompetitionOpenSince**[Month/Year] - gives the approximate year and month of the time the nearest competitor was opened
> **Promo** - indicates whether a store is running a promo on that day
> **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating
> **Promo2Since**[Year/Week] - describes the year and calendar week when the store started participating in Promo2
> **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store
###Code
#import pytorch and AI
#!pip install --upgrade git+https://github.com/fastai/fastai.git
#(Optional) double check your import pytorch and fast.ai version
# fastai related
# The 'tabular' module has
# all the functions as on this page:
# https://docs.fast.ai/tabular.core.html
import fastai
from fastai.tabular import *
from fastai.basics import *
import fastai.utils
#from fastai.data import *
# 1.0 Connect to your google drive
# Transfer rossmann files from
# gdrive to colab VM
from google.colab import drive
drive.mount('/content/drive')
# 1.1 Copy files from source to Colab
source="/content/drive/MyDrive/Colab_data_files/rossmannStoreSales"
dest="/content"
# 1.1.1 Remove existing folders
! rm -rf $dest/rossmannStoreSales
# 1.1.2 Copy files from gdrive to colab VM
! cp -r $source $dest/rossmannStoreSales
# 1.1.3 Check
! ls -la $dest/rossmannStoreSales
# 1.2 Untar tgz file
! tar -xvzf $dest/rossmannStoreSales/rossmann.tgz -C $dest/rossmannStoreSales/
# 1.2.1 And check if all files are there
! ls -la $dest/rossmannStoreSales
# 1.3 Call libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
# 1.4 Display output of multiple commands from a cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# 1.5 Path to our files
path = "/content/rossmannStoreSales"
os.chdir(path)
os.listdir()
###Output
_____no_output_____
###Markdown
CAUTION--Clean-up from gdrive--CAUTIONLet us clean up from gdrive last saved processed files of this project, if they exist.
###Code
# 1.6 Our path
path = "/content/drive/MyDrive/Colab_data_files/"
# 1.6.1
if os.path.exists(path + "joined"):
os.remove(path + "joined")
# 1.6.2
if os.path.exists(path + "joined_p"):
os.remove(path + "joined_p")
# 1.6.3
if os.path.exists(path + "joined_fp"):
os.remove(path+"joined_fp")
# 1.6.4
if os.path.exists(path + "joined_ffp"):
os.remove(path+"joined_ffp")
# 1.6.5
if os.path.exists(path + "joined_fpg"):
os.remove(path+"joined_fpg")
# 1.6.6
if os.path.exists(path + "joined_test"):
os.remove(path+"joined_test")
# 1.6.7 Check if all files deleted
os.listdir(path)
###Output
_____no_output_____
###Markdown
Read all data
###Code
# 2.0 Read all seven files using pandas
train = pd.read_csv("train.csv")
store = pd.read_csv("store.csv")
weather = pd.read_csv("weather.csv")
# 2.0.1
test = pd.read_csv("test.csv")
# 2.0.2
googletrend = pd.read_csv("googletrend.csv")
state_names = pd.read_csv("state_names.csv")
store_states = pd.read_csv("store_states.csv")
# 2.0.3 Also set options to display all rows/all columns
pd.set_option('display.max_columns', None) # or 1000
pd.set_option('display.max_rows', None) # or 1000
pd.set_option('display.max_colwidth', None) # or 199
# 2.0.4 Check if read
train.shape , test.shape, store.shape, weather.shape, googletrend.shape, state_names.shape, store_states.shape
###Output
_____no_output_____
###Markdown
Explore train data
###Code
# 2.1 Look at train data
print("\n---train----\n")
train.shape # (1017209, 9)
print("\n------train------\n")
train.head()
print("\n-----Summary------\n")
train.describe()
print("\n-----dtypes------\n")
train.dtypes
# 2.2
train['Store'].nunique() # 1115
print()
train['Promo'].nunique() # 2
train['Promo'].unique() # [1,0]
print()
train['Open'].nunique() # 2
train['Open'].unique() # [1,0]
print()
train['SchoolHoliday'].nunique() # 2
train['SchoolHoliday'].unique()
print()
train['StateHoliday'].nunique() # 5
train['StateHoliday'].unique() # ['0', 'a', 'b', 'c', 0]
print()
train['Date'].nunique() # 942
# 2.3 About nulls
# No nulls here
train.isnull().sum()
###Output
_____no_output_____
###Markdown
Explore Store data
###Code
# 3.0 Look at store data
print("\n---shape----\n")
store.shape # (1115, 10)
print("\n------data------\n")
store.head()
print("\n-----Summary------\n")
store.describe()
print("\n-----dtypes------\n")
store.dtypes
# 3.1
store['StoreType'].unique() # ['c', 'a', 'd', 'b']
print()
store['Assortment'].unique() # ['a', 'c', 'b']
print()
store['Promo2'].unique() # [0,1]
print()
np.sort(store['CompetitionOpenSinceYear'].unique()) # Large number of years from 1961, 1990 to 2015
# excluding NaN and 1900
# 3.2 Whenever Promo2 is zero, three other columns carry NaN values
store.loc[store['Promo2'] == 0, ['Promo2','Promo2SinceWeek','Promo2SinceYear', 'PromoInterval']].head(5)
print()
store.isnull().sum()
# 3.3 Three cases where CompetitionDistance is NULL
# CompetitionOpenSinceMonth and CompetitionOpenSinceYear
# are also NULL
store.loc[store['CompetitionDistance'].isnull() , :].head()
###Output
_____no_output_____
###Markdown
Explore weather data
###Code
# 4.0 Look at weather data
print("\n---shape----\n")
weather.shape # (15840, 24)
print("\n------data------\n")
weather.head()
print("\n-----Summary------\n")
weather.describe()
print("\n-----dtypes------\n")
weather.dtypes
# 4.1 Unique values for object features
weather['file'].nunique() # 16
print()
weather['Date'].nunique() # 990 Some dates are repeating for some 'file'
print()
weather['Events'].nunique() # 21 unique weather events
# 4.2 Null values
# Some columns have NULL values
# Max_Gust_SpeedKm_h has very large number of NULL values
# Events: In 3951 cases no Events recorded
weather.isnull().sum().sort_values(ascending = False).head(10)
###Output
_____no_output_____
###Markdown
Google trends and other data Explore googletrends
###Code
# 5.0 Look at googletrend data
print("\n---google trends----\n")
googletrend.shape            # (2072, 3)
print()
googletrend.head()
print()
googletrend.dtypes
# 5.1 A little more
googletrend['week'].nunique() # 148
print()
googletrend['file'].nunique() # 14
###Output
_____no_output_____
###Markdown
Split up '*week*' and '*file*' columns of googletrend
This will create new columns. In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use .loc[rows, cols] to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows with state name 'NI' by using a boolean list googletrend.State=='NI' and selecting the "State" column.
###Code
# 6.0
# Refer" # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html
# String split with 'expand'
googletrend.week.str.split(
                          ' - ',           # split on ' - ' (the separator in the 'week' field)
expand = True # Each split part is a column now
)[0].head(2) # select the first split part as a column & display
# 6.1 Split 'week' field at ' - '
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
# 6.2 Split 'file' field at '_' and capture the 2nd index as new column
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
# 6.3
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
googletrend.head()
# 6.4 Also check if any 'NI' remaining
np.sum(googletrend.State=='NI') # 0
# 6.4.1
googletrend.head(10)
###Output
_____no_output_____
###Markdown
More explorations
###Code
# 7.0
state_names.head()
print()
state_names.State.nunique() # 16
print()
store_states.head()
print()
store_states['Store'].nunique() # 1115
print()
store_states['State'].nunique() # 12
# 7.1 StateHoliday
train.StateHoliday.dtype
print()
train.StateHoliday[:4]
###Output
_____no_output_____
###Markdown
Transform *StateHoliday* field
We turn StateHoliday values into booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
###Code
# 7.2 StateHoliday replace by True/False
train.StateHoliday = train.StateHoliday!='0'
train.StateHoliday[:4]
print()
# 7.2.1
test.StateHoliday = test.StateHoliday!='0'
test.StateHoliday[:4]
###Output
_____no_output_____
###Markdown
Date components
Break each date into its components with the `add_datepart()` [fastai](https://github.com/fastai/fastai/blob/master/fastai/tabular/core.py#L15) function.
The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
You should ***always*** consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add these fields to every table with a date field.
###Code
# 8.0
## add_datepart()
# Break every date-field is broken into multiple components:
# ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
# 'Is_month_end', 'Is_month_start', 'Is_quarter_end',
# 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
#
# Source: https://colab.research.google.com/github/duchaba2/fastai_san_ramon_biztech/blob/master/smbt_rossman_data_clean.ipynb#scrollTo=AgmbiE0MR8LE
import re
def add_datepart(df, fldname, drop=True, time=False):
"Helper function that adds columns relevant to a date."
fld = df[fldname]
fld_dtype = fld.dtype
# If date-field in not of 'date' type,
# transform it
if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
fld_dtype = np.datetime64
if not np.issubdtype(fld_dtype, np.datetime64):
df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
targ_pre = re.sub('[Dd]ate$', '', fldname)
attr = [
'Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end',
'Is_quarter_start', 'Is_year_end', 'Is_year_start'
]
if time:
attr = attr + ['Hour', 'Minute', 'Second']
# Begin attribute extraction using '.dt. accessor
for n in attr:
df[targ_pre + n] = getattr(fld.dt, n.lower())
df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
# If original date-field is to be dropped, drop it
if drop:
df.drop(fldname, axis=1, inplace=True)
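# 8.0.1 Illustrative sketch (added; not from the original notebook): a quick
#       self-contained check of add_datepart() on a tiny throwaway frame
demo = pd.DataFrame({"Date": pd.to_datetime(["2015-07-31", "2015-08-01"])})
add_datepart(demo, "Date", drop=False)
demo.columns.tolist()   # 'Date' plus Year, Month, ..., Is_year_start, Elapsed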
# 8.1 Using above function, breakup
# 'dates' in all cases below
add_datepart(train, "Date", drop=False)
# 8.1.1
add_datepart(test, "Date", drop=False)
# 8.1.2
add_datepart(weather, "Date", drop=False)
# 8.1.3
add_datepart(googletrend, "Date", drop=False)
# 8.2 So what are the revised data shapes
train.shape # (1017209, 22)
print()
test.shape # (41088, 21)
print()
weather.shape # (15840, 37)
print()
googletrend.shape # (2072, 18)
print()
# 8.3 And look at one data
train.head()
###Output
_____no_output_____
###Markdown
The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
###Code
# 8.4
googletrend.shape # (2072, 18)
googletrend.head()
print()
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
trend_de.shape # (148, 18)
print()
trend_de.head()
###Output
_____no_output_____
###Markdown
Joining tables
We have several tables; merge them one by one. `join_df` is a function for joining tables on specific fields. By default, we'll be doing a left outer join of right on the left argument using the given fields for each table. Pandas does joins using the merge method. The suffixes argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "_y" to those on the right.
###Code
# 9.0 Define a small function for merging tables
# df.merge(right,
# how='inner',
# left_on=None, # Column or index level names to join on in the left DataFrame
# right_on=None, # Column or index level names to join on in the right DataFrame.
# suffixes=('_x', '_y')) # a string indicating the suffix to add to overlapping column name
def join_df(leftTable, rightTable, left_onField, right_onField=None, suffix='_y'):
# 9.0.1 Both right and left fields would be same
if right_onField is None:
right_onField = left_onField # Both sides are same
# 9.0.2 Return merged table
return leftTable.merge(
rightTable,
how='left', # It is 'left' join AND NOT 'inner' join
                                 # We cannot lose left-side data
left_on=left_onField,
right_on=right_onField,
suffixes=("", suffix) # left side: no suffix,
# right side default is "_y"
# unless mentioned in the function
# call arguments
)
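# 9.0.3 Illustrative sketch (added; not from the original notebook):
#       join_df() on two toy frames -- the left join keeps unmatched
#       left-side rows, filling the right-side columns with NaN
left_toy  = pd.DataFrame({"Store": [1, 2], "Sales": [10, 20]})
right_toy = pd.DataFrame({"Store": [1, 3], "State": ["HE", "BY"]})
join_df(left_toy, right_toy, "Store")   # Store 2 gets NaN for State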
# 9.1 Join1: (weather + state_names)
# join_df(leftTable, rightTable, left_onField, right_onField=None, suffix='_y')
weather.shape # (15840, 37)
print()
# 9.2 Suffix is "_y":
weather = join_df(weather, # Left df
state_names, # right df
"file", # Ist column of 'weather': Contains state names
"StateName" # Ist column of 'state_name'
)
print()
# 9.3
weather.shape # (15840, 37+2)
# 9.4 Columns 'file' and 'StateName' are the same:
weather.head(2) # Both 'file' and 'StateName' columns have same data
# 9.5 Join2: (store + stire_states)
# join_df(leftTable, rightTable, left_onField, right_onField=None, suffix='_y')
store.shape # (1115, 10)
print()
store = join_df(store,
store_states,
"Store"
)
print()
store.shape # (1115, 11)
# 9.5.1 Check point
weather.shape # (15840, 39)
store.shape # (1115, 11)
train.shape # (1017209, 22)
# 9.5.2 Delete any earlier 'joined' (guarded so that the first run doesn't raise NameError)
try:
    del joined
except NameError:
    pass
# 9.6 join3: (train+store)
# join_df(leftTable, rightTable, left_onField, right_onField=None, suffix='_y')
train.shape # (1017209, 22)
print()
joined = join_df(train,
store,
"Store"
)
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()]) # (0, 0)
joined.shape # (1017209, 32)
print()
joined_test.shape # (41088, 31)
# 9.6.1 Check point
weather.shape # (15840, 39)
store.shape # (1115, 11)
train.shape # (1017209, 22)
joined.shape # (1017209, 32)
# 9.7 join4: (joined+googletrend)
joined.shape # (1017209, 32)
print()
joined = join_df(
joined,
googletrend,
["State","Year", "Week"] # Left fields to be joined on
)
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined.shape # (1017209, 47)
print()
joined_test.shape # (41088, 46)
# 9.7.1 Check point
weather.shape # (15840, 39)
store.shape # (1115, 11)
train.shape # (1017209, 22)
joined.shape # (1017209, 47)
#9.8
# join5: (joined+trend_de) Note the suffix here. It is NOT the default '_y'
# join_df(leftTable, rightTable, left_onField, right_onField=None, suffix='_y')
joined.shape # (1017209, 47)
# Use pandas pd.merge here:
joined = joined.merge(
trend_de, # Right table
'left', # type of join
["Year", "Week"], # On these two fields
suffixes=('', '_DE')
)
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()]) #
joined.shape # (1017209, 63)
joined_test.shape # (41088, 62)
# 9.8.1 Check point
weather.shape # (15840, 39)
store.shape # (1115, 11)
train.shape # (1017209, 22)
joined.shape # (1017209, 63)
#9.9
#join6: (joined+weather)
# join_df(leftTable, rightTable, left_onField, right_onField=None, suffix='_y')
joined.shape # (1017209, 63)
joined = join_df(
joined,
weather,
["State","Date"]
)
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()]) # (0,0)
joined.shape # (1017209, 100)
joined_test.shape # (41088, 99)
# 9.9.1 Check point
weather.shape # (15840, 39)
store.shape # (1115, 11)
train.shape # (1017209, 22)
joined.shape # (1017209, 100)
# 9.10 Check
joined.columns
len(joined.columns) # 100
# 9.11 Assign value to 'name' attribute of DataFrame
# We will use it shortly
joined.name = "joined"
joined_test.name = "joined_test"
# 10.0 Remove columns having suffix of '_y'
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns:
print(c,df.name)
df.drop(c, inplace=True, axis=1)
# 10.0.1
# Check point
weather.shape # (15840, 39)
store.shape # (1115, 11)
train.shape # (1017209, 22)
joined.shape # (1017209, 74)
joined_test.shape # (41088, 73)
###Output
_____no_output_____
###Markdown
Missing values
Next we'll fill in missing values to avoid complications with `NA`'s. `NA` (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary *signal value* that doesn't otherwise appear in the data. We will also create some features.
###Code
# 11.0 Just explore
joined[['CompetitionOpenSinceYear', 'CompetitionOpenSinceMonth', 'Promo2SinceYear', 'Promo2SinceWeek']].head(5)
joined[['CompetitionOpenSinceYear', 'CompetitionOpenSinceMonth', 'Promo2SinceYear', 'Promo2SinceWeek']].describe()
###Output
_____no_output_____
###Markdown
Create missing-value indicator columns
For any column with missing values, before filling it, create another column to indicate the location of the missing values.
###Code
#11.0.1 Create columns that indicates that CompetitionOpenSince is missing
for df in (joined,joined_test):
#Create column that indicates that CompetitionOpenSince is missing
df["CompetitionOpenNA"]=False
df.loc[df.CompetitionOpenSinceYear.isna(),"CompetitionOpenNA"]=True
# 11.0.2 Create columns that indicates that CompetitionDistance is missing
df["CompetitionDistanceNA"]=False
df.loc[df.CompetitionDistance.isna(),"CompetitionDistanceNA"]=True
###Output
_____no_output_____
###Markdown
Fill missing values
###Code
# 11.1
for df in (joined,joined_test):
##AA. 'CompetitionOpenSinceYear' & 'CompetitionOpenSinceMonth'
## 15/01/1900 ==== 354 missing values each
# Fill in year as 1900 (354 missing values)
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
# Fill in month as 1 (354 missing values)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
##BB. 'Promo2SinceYear' and 'Promo2SinceWeek'
# 01/01/1900 ==== 544 missing values
# Fill in year as 1900. (544 missing values)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
# Fill in week as 1. (544 missing values)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
    # 11.2 Assume missing CompetitionDistance data is because the competition is too far away to be registered
df.loc[df.CompetitionDistance.isna(),"CompetitionDistance"]= df.CompetitionDistance.max()*2
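# 11.3 Added check (not in the original notebook): the filled columns should
#      now be free of NaNs in the training frame
joined[['CompetitionOpenSinceYear', 'CompetitionOpenSinceMonth',
        'Promo2SinceYear', 'Promo2SinceWeek',
        'CompetitionDistance']].isnull().sum()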
###Output
_____no_output_____
###Markdown
Create new feature from two 'date' features
###Code
# 12. We create a new feature from these two features
joined[['Date','CompetitionOpenSinceYear','CompetitionOpenSinceMonth'] ].head()
# 12.1 Create two features: One new date feature
# and one 'days' elapsed feature:
for df in (joined,joined_test):
# 12.1 Create a new feature 'CompetitionOpenSince'
# CompetitionOpenSince whose year is CompetitionOpenSinceYear
# and whose month is CompetitionOpenSinceMonth and the
# day is 15 (selected randomly)
df["CompetitionOpenSince"] = pd.to_datetime(
dict
(
year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth,
day=15 # This is an arbitrary selection
)
)
# 12.2 Another feature. Days elapsed from 'CompetitionOpenSince' to current 'date'
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
###Output
_____no_output_____
###Markdown
Data cleaning
###Code
# 12.3 Some cleaning of data:
for df in (joined,joined_test):
# CompetitionDaysOpen feature was just created (See above)
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
# Also before 1990 setting CompetitionDaysOpen as 0
# The earliest CompetitionOpenSinceYear is 1990
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
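# Added check (not in the original notebook): no negative elapsed days should remain
(joined.CompetitionDaysOpen < 0).sum()   # expect 0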
# 12.3.1
joined.shape # (1017209, 78)
print()
import sys
np.set_printoptions(threshold=sys.maxsize)
np.sort(joined.columns.values)
###Output
_____no_output_____
###Markdown
Save processed data to disk--I
This will also delete all tables. File created: `joined`
###Code
# 12.4 Save data to disk
# StackOverflow: https://stackoverflow.com/a/17098736/3282777
path = "/content/drive/MyDrive/Colab_data_files/"
joined.to_pickle(path +"joined")
joined_test.to_pickle(path +"joined_test")
# 12.5 Check saved files:
# joined_test size: 18111235
# joined size: 469951247
!ls -la $path
# 12.5.1 Clear memory for future work
del train
del test
del weather
del googletrend
del joined
del joined_test
###Output
_____no_output_____
###Markdown
Read processed data from disk--I
Read file `joined` from disk
###Code
# 12.6 Read saved files
path = "/content/drive/MyDrive/Colab_data_files/"
joined = pd.read_pickle(path +"joined")
joined_test =pd.read_pickle(path +"joined_test")
# 12.6.1 Verify
joined.shape # (1017209, 78)
#joined_test.shape # (41088, 75)
###Output
_____no_output_____
###Markdown
Feature creation
Create more features.
Elapsed time
It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:
* Running averages
* Time until next event
* Time since last event
This is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.
We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.
Upon initialization, this will result in datetime NA's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly. Let's walk through an example.
Say we're looking at SchoolHoliday. We'll first sort by Store, then Date, and then call `get_elapsed('SchoolHoliday', 'After')`. This:
* is applied to every row of the dataframe in order of store and date
* will add to the dataframe the days since seeing a SchoolHoliday
* if we sort in the other direction, will count the days until the next holiday.
The following figure is from this [link](https://docs.fast.ai/tabular.core.html#add_elapsed_times).
The implementation details, written as pseudo-code, are given below.
Pseudo-code for calculating elapsed time
```
0. Initialize
   last_store_seen = 0
   last_date_recorded = np.datetime64()   # NaT
1. Read current store number: csn
2. Read current SchoolHoliday value: sh_value
3. Read current Date: c_date
Begin
   Is csn == last_store_seen?
   if NO:
       last_store_seen = csn
       after = 0
       is sh_value == True?
           if Yes: last_date_recorded = c_date
           if No:  last_date_recorded = np.datetime64()
   if YES:
       is sh_value == True?
           if Yes: last_date_recorded = c_date
                   after = 0
           if No:  after = c_date - last_date_recorded
```
###Code
# 13.0 We are NOT using this function, but it is worth examining.
# The source code of the following fastai function is here:
# https://github.com/fastai/fastai/blob/master/fastai/tabular/core.py#L54
# Usage: https://docs.fast.ai/tabular.core.html#add_elapsed_times
#
# Other useful functions are at the above link.
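# NOTE (added): this function depends on the fastai helpers make_date() and
# _get_elapsed() defined in fastai/tabular/core.py; pasted standalone (without
# those helpers in scope) it raises a NameError.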
def add_elapsed_times(df, field_names, date_field, base_field):
"Add in `df` for each event in `field_names` the elapsed time according to `date_field` grouped by `base_field`"
field_names = list((field_names))
#Make sure date_field is a date and base_field a bool
df[field_names] = df[field_names].astype('bool')
make_date(df, date_field)
work_df = df[field_names + [date_field, base_field]]
work_df = work_df.sort_values([base_field, date_field])
work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'After')
work_df = work_df.sort_values([base_field, date_field], ascending=[True, False])
work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'Before')
for a in ['After' + f for f in field_names] + ['Before' + f for f in field_names]:
work_df[a] = work_df[a].fillna(0).astype(int)
for a,s in zip([True, False], ['_bw', '_fw']):
work_df = work_df.set_index(date_field)
tmp = (work_df[[base_field] + field_names].sort_index(ascending=a)
.groupby(base_field).rolling(7, min_periods=1).sum())
tmp.drop(base_field,1,inplace=True)
tmp.reset_index(inplace=True)
work_df.reset_index(inplace=True)
work_df = work_df.merge(tmp, 'left', [date_field, base_field], suffixes=['', s])
work_df.drop(field_names,1,inplace=True)
return df.merge(work_df, 'left', [date_field, base_field])
###Output
_____no_output_____
###Markdown
Our `get_event_elapsed()` function
###Code
# 13.1 See StackOverflow
# https://stackoverflow.com/a/18215499/3282777
def get_event_elapsed(base_fld, event_fld, dt_fld, df):
"""
    Example:
    base_fld  = 'Store'         -- for every store,
    event_fld = 'SchoolHoliday' -- when this event happens, switch on a stop-watch.
                                   The stop-watch adds a day for each passing day, till
                                   this event next happens; at that time reset the stop-watch.
                                   This gives the 'after' field.
    dt_fld    = 'Date'          -- the date field.
    To get how many days hence the next event will happen (the 'before' field),
    sort the data in descending date order and apply the same algorithm:
    for a given 'Store', the next 'SchoolHoliday' then appears as
    "after how many days since the last".
"""
# 13.2 Initialise all variables
day1 = np.timedelta64(1, 'D') # See expt below
last_store_seen = 0 # This store does not exist
last_date_recorded = np.datetime64() # Nat: Not a time (same as None)
after = 0 # Initial value in 'after' field
res = [] # Collection of all values
# as we move forward in time
for csn,sh_value,c_date in zip(df[base_fld].values,df[event_fld].values,df[dt_fld].values):
# Get current store
if csn != last_store_seen:
after = 0
last_store_seen = csn
if sh_value:
last_date_recorded = c_date
else:
last_date_recorded = np.datetime64()
else:
if sh_value:
last_date_recorded = c_date
                after = 0
else:
"""
StackOverFlow: https://stackoverflow.com/a/18215499/3282777
In the absence of division by day1
we get the following:
numpy.timedelta64(1,'D'),
numpy.timedelta64(2,'D'),
numpy.timedelta64(3,'D'),
numpy.timedelta64(4,'D'),
"""
after = (c_date - last_date_recorded).astype('timedelta64[D]') / day1
res.append(after)
return(res)
###Output
_____no_output_____
###Markdown
Simple experiments
###Code
# 14.0 Here are some np.timedelta64 / np.datetime64 basics
print(np.timedelta64(1, 'D')) # 1 days
print(type(np.timedelta64(1, 'D'))) # It is a timedelta type
print(np.datetime64()) # NaT
# 14.1 Create a date-range from 1st March to 16th March
dr = pd.date_range(start = '03/01/2021', end = '03/16/2021').to_list()
# 14.1.1 Create a data-frame
xy = pd.DataFrame(
{
"event" : [1,0,1,1,0,1, 0,0,0,0,1,1, 0,0,1,0] * 2,
"date_fld" : dr * 2,
"base_fld" : [1]* 16 + [2]* 16
}
)
# 14.1.2
xy.shape # (32, 3)
xy.head()
xy.tail()
# 14.1.3 Shuffle the DataFrame rows
xy = xy.sample(frac = 1)
xy.head()
# 14.2 Sort our dataset on ['base_fld','date_fld']
# dates in ascending order
xy = xy.sort_values(['base_fld','date_fld'])
# 14.3 Then apply our function
# get_event_elapsed(grfld, event_fld, dt_fld, df)
# 14.4
xy['after_event']= get_event_elapsed('base_fld','event', 'date_fld', xy )
xy
# 14.5 Sort the dataframe with dates in descending order
xy = xy.sort_values(['base_fld','date_fld'], ascending = [True,False])
# 14.6 Now apply
# get_event_elapsed(grfld, event_fld, dt_fld, df)
xy['before_event']= get_event_elapsed('base_fld','event', 'date_fld', xy )
xy
# 15.0 Function on Kaggle
# See https://colab.research.google.com/github/duchaba2/fastai_san_ramon_biztech/blob/master/smbt_rossman_data_clean.ipynb#scrollTo=qdOUyEHcR8Ks
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(xy.base_fld.values,xy[fld].values, xy.date_fld.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
xy[pre+fld] = res
# 15.1 Apply the get_elapsed function
# Compare results. They are the same.
fld = 'event'
xy = xy.sort_values(['base_fld', 'date_fld'])
get_elapsed(fld, 'After')
xy = xy.sort_values(['base_fld', 'date_fld'], ascending=[True, False])
get_elapsed(fld, 'Before')
xy
###Output
_____no_output_____
###Markdown
Now with real data
Experiment finished. We now apply the above function to the events 'SchoolHoliday', 'Promo' and 'StateHoliday'.
###Code
# 15.1
# get_event_elapsed(base_fld, event_fld, dt_fld, df)
fld = 'SchoolHoliday'
joined = joined.sort_values(['Store', 'Date'])
joined['after_event_sh'] = get_event_elapsed('Store', fld, 'Date', joined)
joined = joined.sort_values(['Store', 'Date'], ascending=[True, False])
joined['before_event_sh'] = get_event_elapsed('Store', fld, 'Date', joined)
# 15.1.1 Check
joined.shape # (1017209, 78)
# 15.2
fld = 'StateHoliday'
joined = joined.sort_values(['Store', 'Date'])
joined['after_event_st'] = get_event_elapsed('Store', fld, 'Date', joined)
joined = joined.sort_values(['Store', 'Date'], ascending=[True, False])
joined['before_event_st'] = get_event_elapsed('Store', fld, 'Date', joined)
# 15.2.1 Check
joined.shape # (1017209, 80)
# 15.3
fld = 'Promo'
joined = joined.sort_values(['Store', 'Date'])
joined['after_event_pr'] = get_event_elapsed('Store', fld, 'Date', joined)
joined = joined.sort_values(['Store', 'Date'], ascending=[True, False])
joined['before_event_pr'] = get_event_elapsed('Store', fld, 'Date', joined)
# 15.3.1 Check
joined.shape # (1017209, 82)
###Output
_____no_output_____
###Markdown
We're going to set the active index to Date.
###Code
# 16.0
joined = joined.set_index("Date")
joined.shape # (1017209, 81)
joined.head()
###Output
_____no_output_____
###Markdown
Save data to disk--II
Save processed data to disk file `joined_p`
###Code
# 16.1 Save data to disk as 'joined_p'
# StackOverflow: https://stackoverflow.com/a/17098736/3282777
path = "/content/drive/MyDrive/Colab_data_files/"
joined.to_pickle(path +"joined_p")
# 16.2 Check saved files
# joined_p size: 512020944
!ls -la $path
###Output
total 1349928
-rw------- 1 root root 2841873 Dec 27 10:04 bioresponse_train.csv.zip
-rw------- 1 root root 69468154 Feb 17 00:53 cats_dogs.tar.gz
-rw------- 1 root root 469951247 Mar 19 05:59 joined
-rw------- 1 root root 512020944 Mar 19 06:00 joined_p
-rw------- 1 root root 18111235 Mar 19 05:59 joined_test
-rw------- 1 root root 20995 May 2 2016 LCDataDictionary.xlsx
-rw------- 1 root root 76166 Mar 13 12:21 metadata.tsv
drwx------ 2 root root 4096 Feb 17 07:15 model
-rw------- 1 root root 2038945 Oct 30 2019 pos.zip
drwx------ 2 root root 4096 Mar 15 13:04 rossmannStoreSales
-rw------- 1 root root 303835737 Feb 21 04:15 talinkigData_out.csv.zip
drwx------ 2 root root 4096 Feb 21 04:46 talkingData
-rw------- 1 root root 3861202 Mar 13 12:21 vectors.tsv
-rw------- 1 root root 84199 Dec 27 06:30 winequality-red.csv
###Markdown
Read processed data--II
File is `joined_p`
###Code
# 16.3 Read saved files
path = "/content/drive/MyDrive/Colab_data_files/"
joined = pd.read_pickle(path +"joined_p")
joined_test =pd.read_pickle(path +"joined_test")
# 16.4 Check
joined.shape # (1017209, 83)
#joined_test.shape # (41088, 77)
###Output
_____no_output_____
###Markdown
Rolling summaries
Simple experiment--I
Rolling average of prices
###Code
# 17.0
# https://benalexkeen.com/resampling-time-series-data-with-pandas/
stocks_nyse_path = "/content/drive/MyDrive/Colab_data_files/rossmannStoreSales/"
close_px = pd.read_csv(
stocks_nyse_path+"stock_px_2.csv",
parse_dates= True,
index_col = 0 # Make first column as index column
)
# 17.0.1
# Date wise prices for just four tickers
close_px.head()
#17.1 Pandas rolling window.
# Moving averages:
# Summarise last 250 points
# and bring them forward
_=close_px.AAPL.plot()
_=close_px.AAPL.rolling(250).mean().plot()
# 17.2
# Refer: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html
# rolling(
# window, # Size of the moving window.
# # This is the number of observations
# # used for calculating the statistic.
# # Each window will be a fixed size.
# min_periods=None,
# center=False,
# win_type=None,
# on=None,
# axis=0, closed=None)
# 17.3
# To understand rolling forward, let us do one
# simple experiment. Just average last two points
# and create a new column:
close_px['avg2'] = close_px['AAPL'].rolling(2).mean()
close_px['sum2'] = close_px['AAPL'].rolling(2).sum()
close_px[['AAPL', 'avg2','sum2']].head(10)
###Output
_____no_output_____
###Markdown
Simple Experiment--II
Group-based rolling summaries. I have daily data for two Stores. For each store, I want a separate moving average of the last two days:
###Code
# 18.0 Create a date-range from 1st March to 16th March
dr = pd.date_range(start = '03/01/2021', end = '03/16/2021').to_list()
# 18.0.1 Create a data-frame
xy = pd.DataFrame(
{
"SchoolHoliday" : [1,0,1,1,0,1, 0,0,0,0,1,1, 0,0,1,0] * 2,
"StateHoliday" : [0,0,0,1,0,1, 1,0,1,0,1,1, 0,1,0,1] * 2,
"Promo" : [1,1,0,0,1,0,0,1,1,1,0,0,1,0,0,1] * 2,
"date_fld" : dr * 2,
"Store" : [1]* 16 + [2]* 16,
"price" : np.random.normal(loc = 0.24, scale=1, size = (32,) )
}
)
# 18.0.2 Shuffle the DataFrame rows
xy = xy.sample(frac = 1)
xy.head()
###Output
_____no_output_____
###Markdown
To create a rolling window, I must set the date field as the index.
###Code
# 18.0.3
df = xy.set_index('date_fld')
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Next I group the data and take moving averages. For each group, a separate bag/basket is created and the moving average is taken within that group.
###Code
# 18.0.4 Here is moving avg and
# an explanation of code
# Just consider these two columns
# Sort index date wise
# Create separate bags for each group
# Within each bag create rolling windows
# Take mean within each window
mov_avg = df[['Store','price']]. \
sort_index(). \
groupby("Store"). \
rolling(2, min_periods=1). \
mean()
# 18.0.5 Check if moving avg is taken:
mov_avg.head()
# 18.0.6 And our data for Store 1,
# sorted by date-index
df.loc[df['Store'] == 1, ['Store', 'price']].sort_index().head()
###Output
_____no_output_____
###Markdown
Simple Experiment--III
Group by store and take the sum of multiple fields
###Code
# 19.0 Create a date-range from 1st March to 16th March
dr = pd.date_range(start = '03/01/2021', end = '03/16/2021').to_list()
# 19.0.1 Create a data-frame
xy = pd.DataFrame(
{
"SchoolHoliday" : [1,0,1,1,0,1, 0,0,0,0,1,1, 0,0,1,0] * 2,
"StateHoliday" : [0,0,0,1,0,1, 1,0,1,0,1,1, 0,1,0,1] * 2,
"Promo" : [1,1,0,0,1,0,0,1,1,1,0,0,1,0,0,1] * 2,
"date_fld" : dr * 2,
"Store" : [1]* 16 + [2]* 16,
"price" : np.random.normal(loc = 0.24, scale=1, size = (32,) )
}
)
# 19.0.2 Shuffle the DataFrame rows
xy = xy.sample(frac = 1)
xy.head(3)
# 19.0.3 Set date-index
df = xy.set_index('date_fld')
df.shape
df.head(3)
###Output
_____no_output_____
###Markdown
We will now total up holidays in every two-day period.
###Code
# 19.0.4
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
# 19.0.5
bwd_xy = df[['Store'] +columns].\
sort_index().\
groupby("Store").\
rolling(2, min_periods=1).\
sum()
###Output
_____no_output_____
###Markdown
Note that within each 'Store' bag the summation is applied not only to the holiday columns but also to the 'Store' column itself. This is why you observe values like 1, 2 under the Store column: these are not store numbers but moving summations of the values in the 'Store' column. Try a moving summation over three values and check.
###Code
# 19.0.6 Our data has multiple indexes
bwd_xy.head()
# 19.1 Extract data for Ist index
# https://stackoverflow.com/a/18835121/3282777
xx = bwd_xy.iloc[bwd_xy.index.get_level_values('Store') == 1]
xx.head()
# 19.2 We will drop 'Store' column
xx = xx.drop(columns = ['Store'])
xx.head()
# 19.3 To assist checking of moving summation,
# we will create side-by-side columns of
# actual vs moving summations:
# 19.3.1 Sort data and drop columns not needed
sk = df.loc[df['Store'] == 1, :].sort_index().drop(columns = ['Store', 'price'])
# 19.3.2 Begin adding columns to sk from xx
sk['schoolholiday'] = xx['SchoolHoliday'].values
sk['promo'] = xx['Promo'].values
sk['stateholiday'] = xx['StateHoliday'].values
# 19.3.4 Now check
sk[['SchoolHoliday', 'schoolholiday', 'Promo', 'promo','StateHoliday', 'stateholiday']]
###Output
_____no_output_____
###Markdown
Next, moving-summation on real data
We'll now use window functions in pandas to calculate rolling quantities. Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in columns over a 7-day window (`rolling()`), grouped by `Store` (`groupby()`). We do the same in the opposite direction.
###Code
# 20.0 Do rolling store by store
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
#20.1 Weekly summation
bwd = joined[['Store']+columns].\
sort_index().\
groupby("Store").\
rolling(7, min_periods=1).\
sum()
#20.2 Weekly summation
# in opposite direction
fwd = joined[['Store']+columns]. \
sort_index(ascending=False). \
groupby("Store"). \
rolling(7, min_periods=1).\
sum()
# 20.3
bwd.shape # (1017209, 4)
bwd.head()
# 20.3.1
fwd.shape #(1017209, 4)
fwd.head()
###Output
_____no_output_____
###Markdown
Next we will drop the Store indices grouped together in the window function.
###Code
# 20.3.2
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
# 20.3.3
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
Now we'll merge these values onto 'joined'.
###Code
# 21
joined.shape # (1017209, 81)
print()
# 21.1
# Columns ['SchoolHoliday', 'StateHoliday', 'Promo'] coming from bwd and
# fwd are renamed on merge as ['SchoolHoliday_bw', 'StateHoliday_bw', 'Promo_bw']
# and ['SchoolHoliday_fw', 'StateHoliday_fw', 'Promo_fw'] respectively, as these
# contain the moving summations:
# 21.1.1
joined = joined.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
# 21.1.2
joined.shape # (1017209, 85)
print()
# 21.1.3
joined = joined.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
# 21.1.4
joined.shape # (1017209, 88)
print()
###Output
_____no_output_____
###Markdown
And Save again--III
File created `joined_fp`
###Code
# 22.0 Save data to disk as 'joined_fp'
# StackOverflow: https://stackoverflow.com/a/17098736/3282777
path = "/content/drive/MyDrive/Colab_data_files/"
joined.to_pickle(path +"joined_fp")
# 22.1 Check saved files.
# joined_fp Size: 574901643
!ls -la $path
###Output
total 1911355
-rw------- 1 root root 2841873 Dec 27 10:04 bioresponse_train.csv.zip
-rw------- 1 root root 69468154 Feb 17 00:53 cats_dogs.tar.gz
-rw------- 1 root root 469951247 Mar 19 05:59 joined
-rw------- 1 root root 574901643 Mar 19 06:01 joined_fp
-rw------- 1 root root 512020944 Mar 19 06:00 joined_p
-rw------- 1 root root 18111235 Mar 19 05:59 joined_test
-rw------- 1 root root 20995 May 2 2016 LCDataDictionary.xlsx
-rw------- 1 root root 76166 Mar 13 12:21 metadata.tsv
drwx------ 2 root root 4096 Feb 17 07:15 model
-rw------- 1 root root 2038945 Oct 30 2019 pos.zip
drwx------ 2 root root 4096 Mar 15 13:04 rossmannStoreSales
-rw------- 1 root root 303835737 Feb 21 04:15 talinkigData_out.csv.zip
drwx------ 2 root root 4096 Feb 21 04:46 talkingData
-rw------- 1 root root 3861202 Mar 13 12:21 vectors.tsv
-rw------- 1 root root 84199 Dec 27 06:30 winequality-red.csv
###Markdown
Read data--III
Read from `joined_fp`
###Code
# 22.2 Read saved files:
del joined
path = "/content/drive/MyDrive/Colab_data_files/"
joined = pd.read_pickle(path +"joined_fp")
# 22.3 Check
joined.shape # (1017209, 90)
#joined_test.shape # (41088, 77)
###Output
_____no_output_____
###Markdown
Cleaning up and saving
Some data scientists also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the Kaggle competition. One reason for this may be that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
###Code
# 23.0
joined = joined[joined.Sales!=0]
# 23.1
joined.reset_index(inplace=True)
#joined_test.reset_index(inplace=True)
joined.shape # (844338, 91)
###Output
_____no_output_____
###Markdown
Save this data also--IV
File created `joined_fp`
###Code
# 24.0 Save data to disk as 'joined_fp'
# StackOverflow: https://stackoverflow.com/a/17098736/3282777
path = "/content/drive/MyDrive/Colab_data_files/"
joined.to_pickle(path +"joined_fp")
# 24.1 Check saved files.
# joined_fp Size: 471530522
!ls -la $path
###Output
total 1810407
-rw------- 1 root root 2841873 Dec 27 10:04 bioresponse_train.csv.zip
-rw------- 1 root root 69468154 Feb 17 00:53 cats_dogs.tar.gz
-rw------- 1 root root 469951247 Mar 19 05:59 joined
-rw------- 1 root root 471530522 Mar 19 06:02 joined_fp
-rw------- 1 root root 512020944 Mar 19 06:00 joined_p
-rw------- 1 root root 18111235 Mar 19 05:59 joined_test
-rw------- 1 root root 20995 May 2 2016 LCDataDictionary.xlsx
-rw------- 1 root root 76166 Mar 13 12:21 metadata.tsv
drwx------ 2 root root 4096 Feb 17 07:15 model
-rw------- 1 root root 2038945 Oct 30 2019 pos.zip
drwx------ 2 root root 4096 Mar 15 13:04 rossmannStoreSales
-rw------- 1 root root 303835737 Feb 21 04:15 talinkigData_out.csv.zip
drwx------ 2 root root 4096 Feb 21 04:46 talkingData
-rw------- 1 root root 3861202 Mar 13 12:21 vectors.tsv
-rw------- 1 root root 84199 Dec 27 06:30 winequality-red.csv
###Markdown
Read processed data--IV
Read the finally processed and saved data from disk. File is `joined_fp`
###Code
# 24.2 Read saved files:
# del joined_fp
path = "/content/drive/MyDrive/Colab_data_files/"
joined = pd.read_pickle(path +"joined_fp")
# 24.3 Check saved files.
# joined_fp Size: 471530522
!ls -la $path
# 24.4 Check
joined.shape # (844338, 91)
#joined_test.shape # (41088, 77)
###Output
_____no_output_____
###Markdown
Modeling using fastai
###Code
###Output
_____no_output_____ |
_360-in-525/2018/04/jp/360-in-525-04_02.ipynb | ###Markdown
02. Numbers, Strings, Booleans and Sets
[Mathematical Statistical and Computational Foundations for Data Scientists](https://lamastex.github.io/scalable-data-science/360-in-525/2018/04/)
©2018 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
Numbers and Arithmetic Operations
We will start by showing you some of the basic numeric capabilities of SageMath.
A worksheet cell is the area enclosed by a gray rectangle. You may type any expression you want to evaluate into a worksheet cell. We have already put some expressions into this worksheet.
When you are in a cell you can evaluate the expression in it by pressing shift-enter or just by clicking the evaluate button below the cell.
To start with, we are going to be using SAGE like a hand-held calculator. Let's perform the basic arithmetic operations of addition, subtraction, multiplication, division, exponentiation, and remainder over the three standard number systems: Integers denoted by $\mathbb{Z}$, Rational Numbers denoted by $\mathbb{Q}$ and Real Numbers denoted by $\mathbb{R}$. Let us recall the real number line and the basics of number systems next.
###Code
def showURL(url, ht=500):
"""Return an IFrame of the url to show in notebook with height ht"""
from IPython.display import IFrame
return IFrame(url, width='95%', height=ht)
showURL('https://en.wikipedia.org/wiki/Number',400)
###Output
_____no_output_____
###Markdown
The most basic numbers are called natural numbers and they are denoted by $\mathbb{N} :=\{0, 1,2,3,\ldots\}$. See [https://en.wikipedia.org/wiki/Natural_number](https://en.wikipedia.org/wiki/Natural_number).> The natural numbers are the basis from which many other number sets may be built by extension: the integers, by including (if not yet in) the neutral element 0 and an additive inverse (−n) for each nonzero natural number n; the rational numbers, by including a multiplicative inverse (1/n) for each nonzero integer n (and also the product of these inverses by integers); the real numbers by including with the rationals the limits of (converging) Cauchy sequences of rationals; the complex numbers, by including with the real numbers the unresolved square root of minus one (and also the sums and products thereof); and so on. These chains of extensions make the natural numbers canonically embedded (identified) in the other number systems.
###Code
showURL("https://en.wikipedia.org/wiki/Natural_number#Notation",300)
###Output
_____no_output_____
###Markdown
Let us get our fingers dirty with some numerical operations in SageMath. Note that anything after a '#' symbol is a comment - comments are ignored by SAGE but help programmers to know what's going on.
Example 1: Integer Arithmetic
Try evaluating the cell containing 1+2 below by placing the cursor in the cell and pressing shift-enter.
###Code
1+2 # one is being added to 2
###Output
_____no_output_____
###Markdown
Now, modify the above expression and evaluate it again. Try 3+4, for instance.
###Code
3-4 # subtracting 4 from 3
###Output
_____no_output_____
###Markdown
The multiplication operator is `*`, the division operator is `/`.
###Code
2*6 # multiplying 2 by 6
15/5 # dividing 15 by 5
type(1)
###Output
_____no_output_____
###Markdown
The exponentiation operator is `^`.
###Code
2^3 # exponentiating 2 by 3, i.e., raising 2 to the third power
###Output
_____no_output_____
###Markdown
However, Python's exponentiation operator `**` also works.
###Code
2**3
###Output
_____no_output_____
###Markdown
Being able to find the remainder after a division is surprisingly useful in computer programming.
###Code
11%3 # remainder after 11 is divided by 3; i.e., 11=3*3+2
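# (added illustration) a classic use of %: testing whether a number is even
10 % 2 == 0 # True, since 10 leaves remainder 0 when divided by 2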
###Output
_____no_output_____
###Markdown
Another way of referring to this is 11 modulus 3, which evaluates to 2. Here `%` is the modulus operator.
You try
Try typing in and evaluating some expressions of your own. You can get new cells above or below an existing cell by clicking 'Insert' in the menu above and 'Insert Cell Above' or 'Insert Cell Below'. You can also place the cursor at an existing cell and click the `+` icon above to get a new cell below.
What happens if you put space between the characters in your expression, like `1 + 2` instead of `1+2`?
Example 2: Operator Precedence for Evaluating Arithmetic Expressions
Sometimes we want to perform more than one arithmetic operation with some given integers. Suppose we want to "divide 12 by 4, then add the product of 2 and 3, and finally subtract 1." Perhaps this can be achieved by evaluating the expression "12/4+2*3-1"?
But could that also be interpreted as "divide 12 by the sum of 4 and 2 and multiply the result by the difference of 3 and 1"?
In programming, there are rules for the order in which arithmetic operations are carried out. This is called the order of precedence.
The basic arithmetic operations are: +, -, *, %, /, ^. The order in which operations are evaluated is as follows:
- ^ Exponents are evaluated right to left
- *, %, / Then multiplication, remainder and division operations are evaluated left to right
- +, - Finally, addition and subtraction are evaluated left to right
When operators are at the same level in the list above, what matters is the evaluation order (right to left, or left to right). Operator precedence can be forced using parentheses.
###Code
showURL("https://en.wikipedia.org/wiki/Order_of_operations", 300)
(12/4) + (2*3) - 1 # divide 12 by 4 then add the product of 2 and 3 and finally subtract 1
12/4+2*3-1 # due to operator precedence this expression evaluates identically to the parenthesized expression above
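# (added illustration) exponentiation associates right to left:
3^3^2 == 3^(3^2) # True: both equal 3^9 = 19683, whereas (3^3)^2 = 729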
###Output
_____no_output_____
###Markdown
Operator precedence can be forced using nested parentheses. When our expression has nested parentheses, i.e., one pair of parentheses inside another pair, the expression inside the inner-most pair of parentheses is evaluated first.
###Code
(12/(4+2)) * (3-1) # divide 12 by the sum of 4 and 2 and multiply the result by the difference of 3 and 1
###Output
_____no_output_____
###Markdown
You try
Try writing an expression which will subtract 3 from 5 and then raise the result to the power of 3.
Find out for yourself what we mean by the precedence for exponentiation (^) being from right to left:
- What do you think the expression `3^3^2` would evaluate to?
- Is it the same as `(3^3)^2`, i.e., `27` squared, or
- `3^(3^2)`, i.e., `3` raised to the power `9`?
Try typing in the different expressions to find out.
Find an expression which will add the squares of four numbers together and then divide that sum of squares by 4.
Find what the precedence is for the modulus operator `%` that we discussed above: try looking at the difference between the results for `10%2^2` and `10%2*2` (or `10^2+2`). Can you see how SageMath is interpreting your expressions? Note that when you have two operators at the same precedence level (like `%` and `*`), then what matters is the order - left to right or right to left. You will see this when you evaluate `10%2*2`.
Does putting spaces in your expression make any difference? Using parentheses or white spaces can improve readability a lot! So be generous with them.
###Code
10^2+2^8-4
10^2 + 2^8 -4
(((10^2) + (2^8)) - 4)
###Output
_____no_output_____
###Markdown
The lesson to learn is that it is always good to use parentheses: you will make it clear to someone reading your code what you mean to happen, as well as making sure that the computer actually does what you mean it to!
Try this 10-minute-long video to get some practice if you are really rusty with order of operations:
* [Khan Academy Order of operations - https://www.youtube.com/watch?v=ClYdw4d4OmA](https://www.youtube.com/watch?v=ClYdw4d4OmA)
Example 3: Rational Arithmetic
So far we have been dealing with integers. Integers are a type in SAGE. Algebraically speaking, integers, rational numbers and real numbers form a *ring*. This is something you will learn in detail in a maths course in Group Theory or Abstract Algebra, but let's take a quick peek at the definition of a ring.
###Code
showURL("https://en.wikipedia.org/wiki/Ring_(mathematics)#Definition_and_illustration",400)
type(1) # find the data type of 1
###Output
_____no_output_____
###Markdown
The output above tells us that `1` is of type `sage.rings.integer.Integer`.
###Code
showURL("https://en.wikipedia.org/wiki/Integer",400)
###Output
_____no_output_____
###Markdown
However, life with only integers denoted by $\mathbb{Z} := \{\ldots,-3,-2,-1,0,1,2,3,\ldots\}$ is a bit limited. What about values like $1/2$ or $\frac{1}{2}$?
This brings us to the rational numbers denoted by $\mathbb{Q}$.
###Code
showURL("https://en.wikipedia.org/wiki/Rational_number",400)
type(1/2) # data type of 1/2 is a sage.rings.rational.Rational
###Output
_____no_output_____
###Markdown
Try evaluating the cell containing `1/2 + 2` below.
###Code
1/2 + 2 # add one half to 2 or four halves to obtain the rational number 5/2 or five halves
###Output
_____no_output_____
###Markdown
SageMath seems to have done rational arithmetic for us when evaluating the above expression. Next, modify the expression in the cell below and evaluate it again. Try `1/3+2/4`, for instance.
###Code
1/2 + 1/3
###Output
_____no_output_____
###Markdown
You can do arithmetic with rationals just as we did with integers.
###Code
3/4 - 1/4 # subtracting 1/4 from 3/4
1/2 * 1/2 # multiplying 1/2 by 1/2
(2/5) / (1/5) # dividing 2/5 by 1/5
(1/2)^3 # exponentiating 1/2 by 3, i.e., raising 1/2 to the third power
###Output
_____no_output_____
###Markdown
You try
Write an expression which evaluates to `1` using the rationals `1/3` and `1/12`, some integers, and some of the arithmetical operators - there are lots of different expressions you can choose, just try a few.
What does SageMath do with something like `1/1/5`? Can you see how this is being interpreted? What should we do if we really want to evaluate `1` divided by `1/5`?
###Code
1/1/5
###Output
_____no_output_____
###Markdown
Try adding some rationals and some integers together - what type is the result?
Example 4: Real Arithmetic (multi-precision floating-point arithmetic)
Recall that real numbers denoted by $\mathbb{R}$ include natural numbers ($\mathbb{N}$), integers ($\mathbb{Z}$), rational numbers ($\mathbb{Q}$) and various types of irrational numbers like:
- the square root of 2 or $\sqrt{2}$
- [Pi](https://en.wikipedia.org/wiki/Pi) or $\pi$ and
- Euler's number $e$ and
- [Euler–Mascheroni constant](https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant) $\gamma$.
Real numbers can be thought of as all the numbers in the real line between negative infinity and positive infinity. Real numbers are represented in decimal format, for e.g. 234.4677878.
###Code
showURL("https://en.wikipedia.org/wiki/Real_number#Definition",400)
###Output
_____no_output_____
###Markdown
We can do arithmetic with real numbers, actually with [http://www.mpfr.org/](http://www.mpfr.org/)'s multiprecision [floating-point numbers](http://en.wikipedia.org/wiki/Floating_point), and can combine them with integer and rational types in SageMath. *Technical note:* Computers can be made to exactly compute in integer and rational arithmetic. But, because computers with finite memory (all computers today!) cannot represent the [uncountably infinitely many real numbers](http://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument), they can only mimic or approximate arithmetic over real numbers using finitely many computer-representable floating-point numbers.See [SageMath Quick Start on Numerical Analysis](http://doc.sagemath.org/html/en/prep/Quickstarts/NumAnalysis.html) to understand SageMath's multiprecision real arithmetic.For now, let's compare the results of evaluating the expressions below to the equivalent expressions using rational numbers above.
###Code
type(0.5) # data type of 0.5 is a sage.rings.real_mpfr.RealLiteral
RR # Real Field with the default 53 bits of precision
RR(0.5) # RR(0.5) is the same as 0.5 in SageMath
0.5 + 2 # one half as 0.5 is being added to 2 to obtain the real number 2.500..0 in SageMath
0.75 - 0.25 # subtracting 0.25 from 0.75 is the same as 3/4 - 1/4
0.5 * 0.5 # multiplying 0.5 by 0.5 is the same as 1/2 * 1/2
(2 / 5.0) / 0.2 # dividing 2/5. by 0.2 is the same as (2/5) / (1/5)
0.5^3.0 # exponentiating 0.5 by 3.0 is the same as (1/2)^3
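# (added illustration) floating-point only approximates the reals:
RR(0.1) + RR(0.2) == RR(0.3) # False at 53 bits of precision
(RR(0.1) + RR(0.2)) - RR(0.3) # a tiny nonzero rounding error remains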
###Output
_____no_output_____
###Markdown
You try
Find the type of `1/2`.
Try a few different ways of getting the same result as typing `((((1/5) / (1/10)) * (0.1 * 2/5) + 4/100))*5/(3/5)` - this exact expression has already been put in for you in the cell below; you could try something just using floating point numbers. Then see how important the parentheses are around rationals when you have an expression like this - try taking some of the parentheses out and just play with complex expressions like these to get familiar.
###Code
((((1/5) / (1/10)) * (0.1 * 2/5) + 4/100))*5/(3/5)
((((1/5) / (1/10)) * (1/10 * 2/5) + 4/100))*5/(3/5)
###Output
_____no_output_____
###Markdown
Example 5: Variables and assignments of numbers and expressions
Loosely speaking one can think of a *variable* as a way of referring to a memory location used by a computer program. A variable is a symbolic name for this physical location. This memory location contains values, like numbers, text or more complicated types and crucially *what is contained in a variable can change* based on operations we do to it.
In SageMath, the symbol `=` is the *assignment operator*. You can assign a numerical value to a *variable* in SageMath using the assignment operator. This is a good way to store values you want to use or modify later. (If you have programmed before using a language like C or C++ or Java, you'll see that SageMath is a bit different because in SageMath you don't have to say what type of value is going to be assigned to the variable.)
Feel free to take a deeper dive into the computer science concept of assignment.
###Code
a = 1 # assign 1 to a variable named a
a # disclose a - you need to explicitly do this!
###Output
_____no_output_____
###Markdown
Just typing the name of a variable to get the value works in the SageMath Notebook, but if you are writing a program and you want to output the value of a variable, you'll probably want to use something like the print command.
###Code
print(a)
b = 2
c = 3
print a, b, c # print out the values of a and b and c
x=2^(1/2)
x
type(x) # x is a sage symbolic expression
###Output
_____no_output_____
###Markdown
Many of the commands in SageMath/Python are "methods" of objects.
That is, we access them by typing:
- the name of the mathematical object,
- a dot/period,
- the name of the method, and
- parentheses (possibly with an argument).
This is a huge advantage, once you get familiar with it, because it allows you to do only the things that are possible, and all such things. See [SageMath programming guide for more details on this](http://doc.sagemath.org/html/en/prep/Programming.html#methods-and-dot-notation).
Let's try to hit the Tab button after the `.` following `x` below to view all available methods for `x` which is currently `sqrt(2)`.
###Code
x. # hit the Tab button after the '.' following 'x'
help(x.n)
# we can use ? after a method to get brief help
x.n(digits=10) # this gives a numerical approximation for x
s = 1; t = 2; u = 3;
print s + t + u
f=(5-3)^(6/2)+3*(7-2) # assign the expression to f
f # disclose f
type(f)
###Output
_____no_output_____
###Markdown
You tryTry assigning some values to some variables - you choose what values and you choose what variable names to use. See if you can print out the values you have assigned. You can reassign different values to variable names. Using SageMath you can also change the type of the values assigned to the variable (not all programming languages allow you to do this).
###Code
a = 1
print "Right now, a =", a, "and is of type", type(a) # using , and strings in double quotes print can be more flexible
a = 1/3 # reassign 1/3 to the variable a
print "Now, a =", a, "and is of type", type(a) # note the change in type
###Output
Right now, a = 1 and is of type <type 'sage.rings.integer.Integer'>
Now, a = 1/3 and is of type <type 'sage.rings.rational.Rational'>
###Markdown
You tryAssign the value `2` to a variable named `x`.On the next line down in the same cell, assign the value `3` to a variable named `y`.Then (on a third line) put in an expression which will evaluate `x + y`
###Code
x=2
y = 3
x+y
###Output
_____no_output_____
###Markdown
Now try reassigning a different value to x and re-evaluating x + y
###Code
x=4
x+y
###Output
_____no_output_____
###Markdown
Example 6: StringsVariables can be strings (and not just numbers). Anything you put inside quote marks will be treated as a string by SageMath/Python. Strings as `str` and `unicode` are built-in [sequence types](https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange) for storing strings of bytes and unicode-encoded characters and operating over them.
###Code
myStr = "this is a string" # assign a string to the variable myStr
myStr # disclose myStr
type(myStr) # check the type for myStr
###Output
_____no_output_____
###Markdown
You can also create a string by enclosing them in single quotes or three consecutive single quotes. In SageMath/Python a character (represented by the `char` type in languages like C/C++/Scala) is just a string made up of one character.
###Code
myStr = 'this is a string' # assign a string to the variable myStr using single quotes
myStr # disclose myStr
###Output
_____no_output_____
###Markdown
You can assign values to more than one variable on the same line, by separating the assignment expressions with a semicolon `;`. However, it is usually best not to do this because it makes your code harder to read (it is hard to spot the other assignments on a single line after the first one).
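For instance (a throwaway sketch; `p`, `q` and `r` are arbitrary names chosen here):
###Code
p = 10; q = 20; r = 30   # three assignments on one line, separated by ';'
print p, q, r
###Output
_____no_output_____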
###Code
myStr = '''this is a string''' # assign a string to the variable myStr using three consecutive single quotes
myStr # disclose myStr
###Output
_____no_output_____
###Markdown
Using triple single quotes is especially useful if your string has single or double quotes within it. Triple quotes are often used to create `DocString` to document code in Python/SageMath.
###Code
myStrContainingQuotes = '''this string has "a double quoted sub-string" and some escaped characters: \,', - all OK!'''
myStrContainingQuotes
###Output
_____no_output_____
###Markdown
Str and unicode StringsIn Python/SageMath, we need to be extremely careful with strings. The type 'str' is actually a sequence of bytes while the unicode string of type `unicode` is a sequence of unicode characters (some of which can be more than a byte in size). See [this](http://pgbovine.net/unicode-python.htm) for a nice clarification of ASCII and unicode (utf-8) encoded strings. So, it is a good habit to convert strings from natural languages that are meant for processing into unicode strings using the `decode('utf-8')` method right away.
###Code
x = 'hi猫' # this is hi (each letter is encoded by one byte) followed by the Chinese character for cat (3 bytes)
type(x) # x is of type str = sequence of bytes in Python2 / SageMath
len(x) # this is a sequence of five hexadecimal numbers each requiring a byte to represent
###Output
_____no_output_____
###Markdown
Disclosing `x` below only shows the hexadecimal numbers `68` `69` `e7` `8c` `ab`; only `h` for `68` and `i` for `69`, from the [ASCII table](http://www.asciitable.com/), are displayed as characters here, while `\xe7\x8c\xab` are shown as hexadecimal numbers with the prefix `\x` instead of the Chinese character for cat: 猫
###Code
x
print(x) # printing a string displays the desired if the display is unicode-compatible
###Output
hi猫
###Markdown
Generally it is safe to convert strings from natural languages to unicode in Python/SageMath.
###Code
y = x.decode('utf-8') # this decodes or converts the sequence of bytes to a sequence of unicode characters
type(y) # the type of y now is unicode
len(y) # now we have a sequence of just 3 unicode characters as we want
###Output
_____no_output_____
###Markdown
Disclosing `y` shows the two ASCII character `h` and `i` and the Chinese cat character 猫 is specified by the corresponding entry in [utf-8 table](https://en.wikipedia.org/wiki/UTF-8).
###Code
y # output prepended by u shows it is a unicode sequence as opposed to a str which is a byte sequence
print y
###Output
hi猫
###Markdown
When programmatically processing sequences of unicode characters it is much safer to work with `repr` for the canonical string representation of the object.
###Code
?repr # gives the canonical string representation of the object
print repr(y)
print repr(y).decode('unicode_escape')
###Output
u'hi猫'
###Markdown
Pride and Prejudice as unicodeWe will explore frequencies of strings for the most downloaded book at [Project Gutenberg](http://www.gutenberg.org/ebooks/search/?sort_order=downloads) that publishes public domain books online. Currently, books published before 1923 are in the *public domain* - meaning anyone has the right to copy or use the text in any way. Pride and Prejudice by Jane Austen had the largest number of downloads and it is available from [http://www.gutenberg.org/ebooks/1342](http://www.gutenberg.org/ebooks/1342). A quick exploration allows us to see the utf-encoded text [here](http://www.gutenberg.org/files/1342/1342-0.txt). For now, we will just show how to download the most popular book from the project and display its contents for processing down the road.
###Code
# this downloads the unicode text of the book from the right url we found at the Gutenberg Project
# and assigns it to a variable named prideAndPrejudiceRaw
from urllib import *
prideAndPrejudiceRaw = urlopen('http://www.gutenberg.org/files/1342/1342-0.txt').read().decode('utf-8')
prideAndPrejudiceRaw[0:1000] # just showing the first 1000 raw characters of the downloaded book as unicode
type(prideAndPrejudiceRaw) # this is a sequence of utf-8-encoded characters
len(prideAndPrejudiceRaw) # the length of the unicode string is about 700 thousand unicode characters
###Output
_____no_output_____
###Markdown
Next we will show how trivial it is to "read" all the chapters into SageMath/Python using these steps:- we use regular expressions via the `re` library to substitute all occurrences of white-space characters (one or more consecutive end-of-lines, tabs, spaces, etc.) with a single white space, - we split by 'Chapter ' into multiple chapters in a list- print the first 100 characters in each of the first nine chapters(don't worry about the details now - we will revisit these in detail later)
###Code
import re
# make a list of chapters
chapterList = re.sub('\\s+', ' ',prideAndPrejudiceRaw).split('Chapter ')[1:10]
for chapter in chapterList:
print repr(chapter[0:100]).decode('unicode_escape'), '\n';
###Output
u'1 It is a truth universally acknowledged, that a single man in possession of a good fortune, must be'
u'2 Mr. Bennet was among the earliest of those who waited on Mr. Bingley. He had always intended to vi'
u'3 Not all that Mrs. Bennet, however, with the assistance of her five daughters, could ask on the sub'
u'4 When Jane and Elizabeth were alone, the former, who had been cautious in her praise of Mr. Bingley'
u'5 Within a short walk of Longbourn lived a family with whom the Bennets were particularly intimate. '
u'6 The ladies of Longbourn soon waited on those of Netherfield. The visit was soon returned in due fo'
u"7 Mr. Bennet's property consisted almost entirely in an estate of two thousand a year, which, unfort"
u"8 At five o'clock the two ladies retired to dress, and at half-past six Elizabeth was summoned to di"
u"9 Elizabeth passed the chief of the night in her sister's room, and in the morning had the pleasure "
###Markdown
As we learn more we will return to this popular book's unicode. Assignment Gotcha!Let's examine the three assignments in the cell below.The first assignment of `x=3` is standard: Python/SageMath chooses a memory location for `x` and saves the integer value `3` in it. The second assignment of `y=x` is more interesting and *Pythonic*: Instead of finding another location for the variable `y` and copying the value of `3` in it, Python/SageMath differs from the ways of C/C++. Since both variables will have the same value after the assignment, Python/SageMath lets `y` point to the memory location of `x`.Finally, after the third assignment of `y=2`, `x` will NOT be changed to `2`, because the behavior is not that of a C pointer. Since `x` and `y` will not share the same value anymore, `y` gets its own memory location, containing `2`, and `x` sticks to the originally assigned value `3`.
###Code
x=3
print(x) # x is 3
y=x
print(x,y) # x is 3 and y is 3
y=2
print(x,y)
###Output
3
(3, 3)
(3, 2)
###Markdown
As every instance (object or variable) has an identity or `id()`, i.e. an integer which is unique within the script or program, we can use `id()` to understand the above behavior of Python/SageMath assignments.So, let's have a look at our previous example and see how the identities change with the assignments.
###Code
x = 3
print('x and its id() are:')
print(x,id(x))
y = x
print('\ny and its id() are:')
print(y,id(y))
y = 2
print('\nx, y and their id()s are:')
print(x,y,id(x),id(y))
###Output
x and its id() are:
(3, 139843823135168)
y and its id() are:
(3, 139843823135168)
x, y and their id()s are:
(3, 2, 139843823135168, 139843814033952)
###Markdown
Example 7: Truth statements and Boolean valuesConsider statements like "Today is Friday" or "2 is greater than 1" or " 1 equals 1": statements which are either true or not true (i.e., false). SageMath has two values, True and False, which you'll meet in this situation. These values are called Boolean values, or values of the type Boolean.In SageMath, we can express statements like "2 is greater than 1" or " 1 equals 1" with relational operators, also known as value comparison operators. Have a look at the list below.- `<` Less than- `>` Greater than- `<=` Less than or equal to- `>=` Greater than or equal to- `==` Equal to. - `!=` Not equal toLet's try some really simple truth statements.
###Code
1 < 1 # 1 is less than 1
###Output
_____no_output_____
###Markdown
Let us evaluate the following statement.
###Code
1 <= 1 # 1 is less than or equal to 1
###Output
_____no_output_____
###Markdown
We can use these operators on variables as well as on values. Again, try assigning different values to `x` and `y`, or try using different operators, if you want to.
###Code
x = 1 # assign the value 1 to x
y = 2 # assign the value 2 to y
x == y # evaluate the truth statement "x is equal to y"
###Output
_____no_output_____
###Markdown
Note that when we check if something equals something else, we use `==`, a double equals sign. This is because `=`, a single equals sign, is the assignment operator we talked about above. Therefore, to test if `x` equals `y` we can't write `x = y` because this would assign `y` to `x`; instead we use the equality operator `==` and write `x == y`.We can also assign a Boolean value to a variable.
###Code
# Using the same x and y as above
myBoolean = (x == y) # assign the result of x == y to the variable myBoolean
myBoolean # disclose myBoolean
type(myBoolean) # check the type of myBoolean
###Output
_____no_output_____
###Markdown
If we want to check if two things are not equal we use `!=`. As we would expect, it gives us the opposite of testing for equality:
###Code
x != y # evaluate the truth statement "x is not equal to y"
print(x,y) # Let's print x and y to make sure the above statement makes sense
###Output
(1, 2)
###Markdown
You tryTry assigning some values to two variables - you choose what values and you choose what variable names to use. Try some truth statements to check if they are equal, or one is less than the other. You tryTry some strings (we looked at strings briefly in Example 6 above). Can you check if two strings are equal? Can you check if one string is less than (`<`) another string? How do you think that Sage is ordering strings (try comparing "fred" and "freddy", for example)?
###Code
'raazb' <= 'raaza' # strings are compared lexicographically (dictionary order), character by character
'fred' < 'freddy' # a prefix sorts before the longer string
x = [1] # a list is a mutable object
y = x # y points to the same list object as x (no copy is made)
y[0] = 5 # mutating through y ...
print x # ... is visible through x too
print(x,id(x),y,id(y)) # same id(): one object, two names
###Output
[5]
([5], 139843849399112, [5], 139843849399112)
###Markdown
Example 8: Mathematical constantsSage has reserved words that are defined as common mathematical constants. For example, `pi` and `e` behave as you expect. Numerical approximations can be obtained using the `.n()` method, as before.
###Code
print pi, "~", pi.n() # print a numerical approximation of the mathematical constant pi
print e, "~", e.n() # print a numerical approximation of the mathematical constant e
print I, "~", I.n() # print a numerical approximation of the imaginary number sqrt(-1)
(pi/e).n(digits=200) # print the first 200 digits of pi/e
e^(i*pi)+1 # Euler's identity symbolically - see https://en.wikipedia.org/wiki/Euler%27s_identity
###Output
_____no_output_____
###Markdown
Example 9: SageMath number types and Python number typesWe showed how you can find the type of a number value and we demonstrated that by default, SageMath makes 'real' numbers like 3.1 into Sage real literals (`sage.rings.real_mpfr.RealLiteral`). If you were just using Python (the programming language underlying most of SageMath) then a value like 3.1 would be a floating point number or float type. Python has some interesting extra operators that you can use with Python floating point numbers, which also work with the Sage rings integer type but not with Sage real literals.
###Code
X = 3.1 # convert to default Sage real literal 3.1
type(X)
X = float(3.1) # convert the default Sage real literal 3.1 to a float 3.1
type(X)
###Output
_____no_output_____
###Markdown
Floor Division (`//`) - The division of operands where the result is the quotient in which the digits after the decimal point are removed - the result is floored, i.e., rounded towards negative infinity: examples: 9//2 = 4 and 9.0//2.0 = 4.0, -11//3 = -4, -11.0//3 = -4.0
###Code
3 // 2 # floor division
3.3 // 2.0 # this will give error - floor division is undefined for Sage real literals
float(3.5) // float(2.0)
###Output
_____no_output_____
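###Markdown
To reinforce that `//` rounds towards negative infinity, here is a small sketch using plain Python integers (so it runs regardless of Sage's types), together with the floor-division identity:
###Code
print int(-11) // int(3)         # -4, not -3: the quotient is rounded towards negative infinity
a = int(-11); b = int(3)
print a == (a // b) * b + a % b  # True: a equals quotient*divisor plus remainder
###Output
_____no_output_____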
###Markdown
Similarly, we have the light-weight Python integer type `int` that we may want instead of SageMath integer type for non-mathematical operations.
###Code
type(3) # the default Sage rings integer type
X = int(3) # conversion to a plain Python integer type
type(X)
3/2 # see the result you get when dividing one default Sage rings integer type by another
###Output
_____no_output_____
###Markdown
One of the differences between SageMath rings integers and plain Python integers is that the result of dividing one SageMath rings integer by another is a rational. This probably seems very sensible, but it is not what happens at the moment with Python integers.
###Code
int(7)/int(2) # division using python integers is "floor division"
###Output
_____no_output_____
###Markdown
We showed the `.n()` method. If X is some Sage real literal and we use `X.n(20)` we will be asking for 20 bits of precision, which is about how many bits in the computer's memory will be allocated to hold the number. If we ask for `X.n(digits=20)` we will be asking for 20 digits of precision, which is not the same thing. Also note that 20 digits of precision does not mean showing the number to 20 decimal places, it means all the digits including those in front of the decimal point.
###Code
help(n) # always ask for help when you need it - or lookup in help menu above
X=3.55555555
X.n(digits = 3)
X.n(3) # this will use 3 bits of precision
round(X,3)
?round # this opens a window with help information that can be closed
###Output
_____no_output_____
###Markdown
If you want to actually round a number to a specific number of decimal places, you can also use the round(...) function. For a deeper dive see the documents on [Python Numeric Types](https://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex) and SageMath numeric types. SetsSet theory is at the very foundation of modern mathematics and is necessary to understand the mathematical notions of probability and statistics. We will take a practical mathematical tour of the essential concepts from set theory that a data scientist needs to understand and build probabilistic models from the data using statistical principles.
###Code
showURL("https://en.wikipedia.org/wiki/Set_(mathematics)",500)
###Output
_____no_output_____
###Markdown
Essentials of Set Theory for Probability and StatisticsLet us learn or recall elementary set theory. Sets are perhaps the most fundamental concept in mathematics. Definitions**Set** *is a collection of distinct elements*. We write a set by enclosing its elements with curly brackets. Let us see some examples next.- The collection of $\star$ and $\circ$ is $\{\star,\circ\}$.- We can name the set $\{\star,\circ\}$ by the letter $A$ and write $$A=\{\star,\circ\}.$$- Question: Is $\{\star,\star,\circ\}$ a set?- A set of letters and numbers that I like is $\{b,d,6,p,q,9\}$.- The set of the first five Greek letters is $\{\alpha,\beta,\gamma,\delta,\epsilon\}$.The set that contains no elements is the **empty set**. It is denoted by $$\boxed{\emptyset = \{\}} \ .$$We say an element belongs to or does not belong to a set with the binary operators $$\boxed{\in \ \text{or} \ \notin} \ .$$ For example,- $\star \in \{\star,\circ\}$ but the element $\otimes \notin \{\star,\circ\}$- $b \in \{b,d,6,p,q,9\}$ but $8 \notin \{b,d,6,p,q,9\}$- Question: Is $9 \in \{3,4,1,5,2,8,6,7\}$?We say a set $C$ is a **subset** of a set $D$ and write$$\boxed{C \subset D}$$if every element of $C$ is also an element of $D$. For example,- $\{\star\} \subset \{\star,\circ\}$- Question: Is $\{6,9\}\subset \{b,d,6,p,q,9\}$? Set OperationsWe can add distinct new elements to an existing set by the **union** operation, denoted by the $\cup$ symbol. For example- $\{\circ, \bullet\} \cup \{\star\} = \{\circ,\bullet,\star\}$- Question: $\{\circ, \bullet\} \cup \{\bullet\} = \quad$?More formally, we write the union of two sets $A$ and $B$ as $$\boxed{A \cup B = \{x: x \in A \ \text{or} \ x \in B \}} \ .$$The symbols above are read as *$A$ union $B$ is equal to the set of all $x$ such that $x$ belongs to $A$ or $x$ belongs to $B$* and simply means that $A$ union $B$ or $A \cup B$ is the set of elements that belong to $A$ or $B$.Similarly, the **intersection** of two sets $A$ and $B$ written as $$\boxed{A \cap B = \{x: x \in A \ \text{and} \ x \in B \}} $$ means $A$ intersection $B$ is the set of elements that belong to both $A$ and $B$.For example- $\{\circ, \bullet\} \cap \{\circ\} = \{\circ\}$- $\{\circ, \bullet\} \cap \{\bullet\} = \{\bullet\}$- $\{\circ\} \cap \{a,b,c,d\}=\emptyset$The **set difference** of two sets $A$ and $B$ written as $$\boxed{A \setminus B = \{x: x \in A \ \text{and} \ x \notin B \}} $$ means $A \setminus B$ is the set of elements that belong to $A$ and do not belong to $B$.For example- $\{\circ, \bullet\} \setminus \{\circ\} = \{\bullet\}$- $\{\circ, \bullet\} \setminus \{\bullet\} = \{\circ\}$- $\{a,b,c,d\} \setminus \{a,b,c,d\}=\emptyset$The equality of two sets $A$ and $B$ is defined in terms of subsets as follows: $$\boxed{A = B \quad \text{if and only if} \quad A \subset B \ \text{and} \ B \subset A} \ .$$Two sets $A$ and $B$ are said to be **disjoint** if $$\boxed{ A \cap B = \emptyset} \ .$$Given a **universal set** $\Omega$, we define the **complement** of a subset $A$ of the universal set by $$\boxed{A^c = \Omega \setminus A = \{x: x \in \Omega \ \text{and} \ x \notin A\}} \ .$$ An Interactive Venn DiagramLet us gain more intuition by seeing the unions and intersections of sets interactively. The following interact is from the [interact/misc](https://wiki.sagemath.org/interact/misc#AnInteractiveVennDiagram) page of the Sage Wiki.
###Code
# ignore this code for now and focus on the interact in the output cell
def f(s, braces=True):
t = ', '.join(sorted(list(s)))
if braces: return '{' + t + '}'
return t
def g(s): return set(str(s).replace(',',' ').split())
@interact
def _(X='1,2,3,a', Y='2,a,3,4,apple', Z='a,b,10,apple'):
S = [g(X), g(Y), g(Z)]
X,Y,Z = S
XY = X & Y
XZ = X & Z
YZ = Y & Z
XYZ = XY & Z
pretty_print(html('<center>'))
pretty_print(html("$X \cap Y$ = %s"%f(XY)))
pretty_print(html("$X \cap Z$ = %s"%f(XZ)))
pretty_print(html("$Y \cap Z$ = %s"%f(YZ)))
pretty_print(html("$X \cap Y \cap Z$ = %s"%f(XYZ)))
pretty_print(html('</center>'))
centers = [(cos(n*2*pi/3), sin(n*2*pi/3)) for n in [0,1,2]]
scale = 1.7
clr = ['yellow', 'blue', 'green']
G = Graphics()
for i in range(len(S)):
G += circle(centers[i], scale, rgbcolor=clr[i],
fill=True, alpha=0.3)
for i in range(len(S)):
G += circle(centers[i], scale, rgbcolor='black')
# Plot what is in one but neither other
for i in range(len(S)):
Z = set(S[i])
for j in range(1,len(S)):
Z = Z.difference(S[(i+j)%3])
G += text(f(Z,braces=False), (1.5*centers[i][0],1.7*centers[i][1]), rgbcolor='black')
# Plot pairs of intersections
for i in range(len(S)):
Z = (set(S[i]) & S[(i+1)%3]) - set(XYZ)
C = (1.3*cos(i*2*pi/3 + pi/3), 1.3*sin(i*2*pi/3 + pi/3))
G += text(f(Z,braces=False), C, rgbcolor='black')
# Plot intersection of all three
G += text(f(XYZ,braces=False), (0,0), rgbcolor='black')
# Show it
G.show(aspect_ratio=1, axes=False)
###Output
_____no_output_____
###Markdown
Create and manipulate sets in SageMath. Example 0: Lists before SetsA `list` is a sequential collection that we will revisit in detail soon. For now, we just need to know that we can create a list by using delimiter `,` between items and by wrapping with left and right square brackets: `[` and `]`. For example, the following is a list of 4 integers:
###Code
[1,2,3,4]
myList = [1,2,3,4] # we can assign the list to a variable myList
print(myList) # print myList
type(myList) # and ask for its type
###Output
[1, 2, 3, 4]
###Markdown
List is one of the most primitive data structures and has a long history in a popular computer programming language called LISP - originally created as a practical mathematical notation for computer programs. For now, we just use lists to create sets. Example 1: Making setsIn SageMath, you do have to specifically say that you want a set when you make it.
###Code
X = set([1, 2, 3, 4]) # make the set X={1,2,3,4} from the List [1,2,3,4]
X # disclose X
type(X) # what is the type of X
###Output
_____no_output_____
###Markdown
This is a specialized datatype in Python and more details can be found in Python docs: [https://docs.python.org/2/library/datatypes.html](https://docs.python.org/2/library/datatypes.html)
###Code
4 in X # 'is 4 in X?'
5 in X # 'is 5 in X?'
Y = set([1, 2]) # make the set Y={1,2}
Y # disclose Y
4 not in Y # 'is 4 not in Y?'
1 not in Y # 'is 1 not in Y?'
###Output
_____no_output_____
###Markdown
We can add new elements to a set.
###Code
X.add(5) # add 5 to the set X
X
###Output
_____no_output_____
###Markdown
But remember from the mathematical exposition above that sets contain distinct elements.
###Code
X.add(1) # try adding another 1 to the set X
X
###Output
_____no_output_____
###Markdown
You tryTry making the set $Z=\{4,5,6,7\}$ next. The instructions are in the two cells below.
###Code
# Write in the expression to make set Z ={4, 5, 6, 7}
# (press ENTER at the end of this line to get a new line)
Z = set([4,5,6,7]) #([4],[5],[6,7]))
# Check if 4 is in Z
4 in Z
# (press ENTER at the end of this line to get a new line)
###Output
_____no_output_____
###Markdown
Make a set with the value 2/5 (as a rational) in it. Try adding 0.4 (as a floating point number) to the set. Does SageMath do what you expect? Example 2: SubsetsIn lectures we talked about subsets of sets. Recall that `Y` is a subset of `X` if each element in `Y` is also in `X`.
###Code
print "X is", X
print "Y is", Y
print "Is Y a subset of X?"
Y <= X # 'is Y a subset of X?'
###Output
X is set([1, 2, 3, 4])
Y is set([1, 2])
Is Y a subset of X?
###Markdown
If you have time: We say Y is a proper subset of X if all the elements in Y are also in X but there is at least one element in X that is not in Y. If X is a (proper) subset of Y, then we also say that Y is a (proper) superset of X.
###Code
X < X # 'is X a proper subset of itself?'
X > Y # 'is X a proper superset of Y?'
X > X # 'is X a proper superset of itself?'
X >= Y # 'is X a superset of Y?' is the same as 'is Y a subset of X?'
###Output
_____no_output_____
###Markdown
Example 3: More set operationsNow let's have a look at the other set operations we talked about above: intersection, union, and difference.Recall that the intersection of X and Y is the set of elements that are in both X and Y.
###Code
X & Y # '&' is the intersection operator
###Output
_____no_output_____
###Markdown
The union of X and Y is the set of elements that are in either X or Y.
###Code
X | Y # '|' is the union operator
###Output
_____no_output_____
###Markdown
The set difference between X and Y is the set of elements in X that are not in Y.
###Code
X - Y # '-' is the set difference operator
###Output
_____no_output_____
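###Markdown
The complement from the set-theory section above has no dedicated operator, but given a universal set it is just a set difference. A minimal sketch (`Omega` is an illustrative universal set chosen here):
###Code
Omega = set([1, 2, 3, 4, 5, 6])  # a universal set for this sketch
Omega - X                        # the complement of X relative to Omega
###Output
_____no_output_____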
###Markdown
You tryTry some more work with sets of strings below.
###Code
fruit = set(['orange', 'banana', 'apple'])
fruit
colours = set(['red', 'green', 'blue', 'orange'])
colours
###Output
_____no_output_____
###Markdown
Fruit and colours are different to us as people, but to the computer, the string 'orange' is just the string 'orange' whether it is in a set called fruit or a set called colours.
###Code
print "fruit intersection colours is", fruit & colours
print "fruit union colours is", fruit | colours
print "fruit - colours is", fruit - colours
print "colours - fruit is", colours - fruit
###Output
fruit intersection colours is set(['orange'])
fruit union colours is set(['blue', 'green', 'apple', 'orange', 'banana', 'red'])
fruit - colours is set(['banana', 'apple'])
colours - fruit is set(['blue', 'green', 'red'])
###Markdown
Try a few other simple subset examples - make up your own sets and try some intersections, unions, and set difference operations. The best way to try new possible operations on a set such as X we just created is to type a period after X and hit the `<TAB>` key. This will bring up all the possible methods you can call on the set X.
###Code
mySet = set([1,2,3,4,5,6,7,8,9])
mySet. # try placing the cursor after the dot and hit <TAB> key
?mySet.add # you can get help on a method by prepending a question mark
###Output
_____no_output_____
###Markdown
In fact, there are two ways to make sets in SageMath. We have so far used [the python set](https://docs.python.org/2/library/sets.html) to make a set. However we can use the SageMath `Set` to make sets too. SageMath `Set` is more mathematically consistent. If you are interested in the SageMath `Set` go to the source and work through the [SageMath reference on Sets](http://doc.sagemath.org/html/en/reference/sets/sage/sets/set.html). But, first let us appreciate the difference between Python `set` and SageMath `Set`!
###Code
X = set([1, 2, 3, 4]) # make the set X={1,2,3,4} with python set
X # disclose X
type(X) # this is the set in python
anotherX = Set([1, 2, 3, 4]) # make the set anotherX={1,2,3,4} in SAGE Set
anotherX. # place the cursor after the dot and hit the <TAB> key to see the SageMath Set methods
type(anotherX) # this is the set in SAGE and is more mathy
###Output
_____no_output_____
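###Markdown
One concrete difference, as a quick sketch: the SageMath `Set` carries mathematical methods such as `cardinality()` (where a plain Python `set` would use `len`):
###Code
anotherX.cardinality()        # the number of elements, the mathy way
anotherX.union(Set([5, 6]))   # union via a method on the SageMath Set
###Output
_____no_output_____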
###Markdown
Example 4Python also provides something called a [frozenset](https://docs.python.org/2/library/stdtypes.htmlfrozenset), which you can't change like an ordinary set.
###Code
aFrozenSet = frozenset([2/5, 0.2, 1/7, 0.1])
aFrozenSet
#aFrozenSet.add(0.3) # uncommenting this raises an AttributeError: frozensets are immutable
###Output
_____no_output_____ |
Finding_Best_Biryani_Point_In__Bangalore.ipynb | ###Markdown
Table of contents* [Introduction: Business Problem](#introduction)* [Data](#data)* [Methodology](#methodology)* [Analysis](#analysis)* [Results and Discussion](#results)* [Conclusion](#conclusion) Introduction: Business Problem In this project, I am trying to find the best restaurant to eat chicken biryani for a person who has come to Bangalore for the first time and is staying in a hotel within a 5 km range of his location. This project also targets people who want to know the best biryani point in their own areas. As there may be many restaurants within a 5 km range, it's difficult to say which one serves the best biryani. To solve this problem I will use the magic of Data Science. Data To solve this problem I require the following data about restaurants within a 5 km range of the user's location:* whether the restaurant serves biryani or not.* rating of the restaurant from online platforms.* the restaurant's coordinates, for showing the path from the user's location to the restaurant on a map.* the user's coordinates.* travel time to each restaurant from the user's location.* travel distance between each restaurant and the user's location.The following data sources will be needed to extract/generate the required information:* **geopy library** to get the user's coordinates.* **foursquare API** to explore the user's location and get a list of nearby restaurants.* **zomato API** to get ratings of restaurants. (I am using the zomato API for ratings because zomato is used more here in Bangalore than foursquare.)* **bing map API** to get driving path locations.**Note*** I will only select the first 100 restaurants within a 5 km range for this project.* I will use manual searching to determine whether a restaurant serves biryani or not, because these restaurants' food menu data is not available in the zomato API, foursquare API or uber eats API, so I have no other option.
###Code
# installing required libraries
!pip install geopy # Installing geopy library, this library helps in getting Latitude and Longitude of a given address.
!pip install folium # map visualizing library.
#avoiding warnings
import pandas as pd # pandas must be imported before configuring its options
pd.options.mode.chained_assignment = None # avoiding SettingWithCopyWarning
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
#importing required libraries
import pandas as pd
import numpy as np
import requests
import folium
from geopy.geocoders import Nominatim # Nominatim converts an address into latitude and longitude values.
#Getting Location's Coordinates of Bangalore City.......
address = 'Bangalore'
geolocator = Nominatim(user_agent='Bangalore_explorer')
Bng_location = geolocator.geocode(address)
Bng_latitude = Bng_location.latitude
Bng_longitude = Bng_location.longitude
print('latitide=',Bng_latitude,' longitude=',Bng_longitude)
#Getting the Hotel location where the user stays...
# I am considering that for this project, the user stays in The Park Bangalore
Hotel_address = 'The Park Bangalore'
B_location = geolocator.geocode(Hotel_address)
B_lat = B_location.latitude
B_long = B_location.longitude
H_lat, H_long = B_lat, B_long # aliases for the hotel coordinates, used by the API-calling cells below
print('latitude=',B_lat,' longitude=',B_long)
# defining forsquare credentials to get list of 100 restaurants near by user's location.
LIMIT = 100 # no. of restaurants.
radius = 6000 # defining a 6 km range; I am considering range+1 km
CLIENT_ID = 'enter your id' # my Foursquare ID
CLIENT_SECRET = 'enter your id' # my Foursquare Secret
VERSION = '20180605' # Foursquare API version
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
H_lat,
H_long,
radius,
LIMIT)
url
# API calling to get data in Json format
result = requests.get(url).json()
result
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
restaurants = result['response']['groups'][0]['items']
nearby_restaurants = json_normalize(restaurants) # flatten JSON
# filter columns
filtered_columns = ['venue.name','venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_restaurants = nearby_restaurants.loc[:, filtered_columns]
# filter the category for each row
nearby_restaurants['venue.categories'] = nearby_restaurants.apply(get_category_type, axis=1)
# clean columns
nearby_restaurants.columns = [col.split(".")[-1] for col in nearby_restaurants.columns]
nearby_restaurants.head()
# getting ratings of restaurants from zomato
rating=[] #list to store rating
zomato_key = {'user-key':'enter your key'} # zomato api key
count = 1 # counter
print('Processing restaurant no: ')
for name,lat,lng in zip(nearby_restaurants['name'],nearby_restaurants['lat'],nearby_restaurants['lng']):
print(count,end=" ")
url = ('https://developers.zomato.com/api/v2.1/search?q={}&start=0&count=1&lat={}&lon={}').format(name,lat,lng)
result = requests.get(url, headers = zomato_key)
if(result.status_code == 200):
try:
result = result.json()
rating.append(result['restaurants'][0]['restaurant']['user_rating']['aggregate_rating'])
except:
rating.append('NA')
else:
rating.append('NA')
count+=1
print('\n restaurants ratings are : ')
print(rating)
# adding ratings
nearby_restaurants['zomato rating'] = rating
nearby_restaurants.head()
# checking restaurants who have no ratings
nearby_restaurants[nearby_restaurants['zomato rating']=='NA'].index
# droping the restaurants from list
nearby_restaurants.drop([19, 45, 66],axis=0,inplace=True)
nearby_restaurants.head()
# getting travel distance and travel time
from datetime import datetime
start_time = datetime.now(tz=None)
time=[] # time unit: minutes
distance=[] # distance unit: km
key = 'enter your key' # Bing Map Api Key.
count = 1 # counter.
print('Processing restaurant no: ')
for name,lat,lng in zip(nearby_restaurants['name'],nearby_restaurants['lat'],nearby_restaurants['lng']):
url = 'https://dev.virtualearth.net/REST/v1/Routes/DistanceMatrix?origins={},{}&destinations={},{}&travelMode=driving&startTime={}&key={}'.format(
H_lat,
H_long,
lat,
lng,
start_time,
key)
print(count,end=" ")
result = requests.get(url).json()
time.append(result['resourceSets'][0]['resources'][0]['results'][0]['travelDuration'])
distance.append(result['resourceSets'][0]['resources'][0]['results'][0]['travelDistance'])
count+=1
print('\n time list : ')
print(time)
print('\n distance list')
print(distance)
# adding time, distance and serving biryani data to our dataframe
nearby_restaurants['travel time minutes'] = time
nearby_restaurants['travel distance km'] = distance
serving_biryani = [1,0,0,1,1,0,1,0,0,1,1,0,0,1,
1,0,0,1,1,0,1,0,1,0,0,0,1,1,
0,1,1,0,0,1,0,0,0,0,0,0,0,0,
0,1,0,1,0,0,0,0,0,0,0,0,0,1,
0,0,1,1,1,0,0,0,0,0,0,0,0,0,
1,0,0,0,0,1,1,0,0,0,0,0,0,1,
0,0,0,0,0,0,0,0,0,0,0,0,0,] # manually generated list from online searching -> 1: yes, 0: no
nearby_restaurants['serves biryani'] = serving_biryani
nearby_restaurants.head()
# getting those restaurants who serves biryani
biryani_points = nearby_restaurants[nearby_restaurants['serves biryani']==1].reset_index(drop=True)
biryani_points.head()
biryani_points.shape
###Output
_____no_output_____
###Markdown
There are 27 restaurants that serve biryani near the user's location. Our data acquisition and data preprocessing is now complete; next, some visualizations. Bangalore City Map
###Code
import folium
Bng_map = folium.Map( location=[Bng_latitude,Bng_longitude], zoom_start=12)
Bng_map
###Output
_____no_output_____
###Markdown
Get user's location on map
###Code
Bng_map = folium.Map(location=[B_lat,B_long],zoom_start=20)
folium.Marker(location=[B_lat,B_long],tooltip='The Park Bangalore').add_to(Bng_map)
Bng_map
###Output
_____no_output_____
###Markdown
Showing all the restaurants that serve biryani on the map
###Code
Bng_map = folium.Map(location=[B_lat,B_long],zoom_start=13)
folium.Marker(location=[B_lat,B_long],tooltip='The Park Bangalore').add_to(Bng_map)
for name,lat,lng in zip(biryani_points['name'],biryani_points['lat'],biryani_points['lng']):
folium.CircleMarker(
location=[lat,lng],
tooltip=name,
color='blue',
radius=6,
fill=True,
fill_color='red',
fill_opacity=0.8).add_to(Bng_map)
Bng_map
###Output
_____no_output_____
###Markdown
Methodology To find the best biryani-serving restaurant near the user's location, I first need to look at the restaurants with the highest rating, so I sort the data in descending order by rating to get the highest-rated restaurants at the top of the list. I will then consider only the restaurants with the highest rating value, and sort that list by travel time in ascending order to find a restaurant the user can reach in a short time. For the result I will select the first restaurant in the list after doing all this analysis. To solve this problem I don't require any machine learning algorithm.
###Code
biryani_points['zomato rating'] = biryani_points['zomato rating'].astype(float)
# Sorting data Frame according to zomato ratings in descending order.
biryani_points.sort_values(by='zomato rating',ascending=False,inplace=True)
biryani_points.reset_index(drop=True,inplace=True)
biryani_points.head()
# Getting list of highest rating restaurants.
top_points = biryani_points[biryani_points['zomato rating']==biryani_points['zomato rating'][0]]
top_points
###Output
_____no_output_____
###Markdown
The next step handles the situation where more than one restaurant shares the highest rating, so there is a need to choose one; for this I choose the restaurant the user can reach in the shortest time.
###Code
# sorting the dataframe according to travel time in ascending order..
top_points.sort_values(by=['travel time minutes'],inplace=True) # sorting list according to time.
top_points.reset_index(drop=True, inplace=True)
top_points = top_points[top_points['travel distance km']<=5].reset_index(drop=True) # considering only restaurants those are in 5 km range.
top_points
###Output
_____no_output_____
###Markdown
Let's do some analysis now.. Analysis **Doing some additional analysis on fetched data..** **Let's check how many different catagories of restaurants are near by the user's location**
###Code
len(nearby_restaurants['categories'].unique())
###Output
_____no_output_____
###Markdown
There are 47 different categories of restaurants.
###Code
category = nearby_restaurants['categories'].unique().tolist()
category[0:10]
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure(figsize=(10,20))
ax = sns.countplot(y='categories', data=nearby_restaurants)
y_count=0.1
for p in ax.patches:
ax.annotate(str(p.get_width()), (p.get_width()+0.05,y_count),color='blue')
y_count+=1
plt.title('No. of restaurants in each category',size=15)
plt.show()
###Output
_____no_output_____
###Markdown
Mostly there are one or two restaurants in each category, except Indian Restaurants, South Indian, Cafe and Bakery. **Let's check the different categories of restaurants that serve biryani**
###Code
biryani_points.categories.unique()
plt.figure(figsize=(10,10))
ax = sns.countplot(y='categories', data=biryani_points)
y_count=0.1
for p in ax.patches:
ax.annotate(str(p.get_width()), (p.get_width()+0.05,y_count),color='blue')
y_count+=1
plt.title('No. of restaurants in each category those sereve biryani',size=15)
plt.show()
###Output
_____no_output_____
###Markdown
**Let's take a look at the ratings**
###Code
nearby_restaurants['zomato rating'] = nearby_restaurants['zomato rating'].astype(float)
plt.figure(figsize=(20,10))
ax = sns.countplot(x='zomato rating', data=nearby_restaurants)
for p in ax.patches:
ax.annotate(str(p.get_height()), (p.get_x()+0.3,p.get_height()+0.1),color='blue')
plt.title('No. restaurants in each uniue rating value', size=15)
plt.show()
###Output
_____no_output_____
###Markdown
Most restaurants have ratings between 4 and 4.6. There is only one restaurant with a rating of 4.8. **Let's check the ratings of the restaurants that serve biryani**
###Code
plt.figure(figsize=(15,10))
ax = sns.countplot(x='zomato rating', data=biryani_points)
for p in ax.patches:
ax.annotate(str(p.get_height()), (p.get_x()+0.3,p.get_height()+0.1),color='blue')
plt.title('No. of restaurants in each unique rating value those serve biryani', size=15)
plt.show()
###Output
_____no_output_____
###Markdown
There is only one restaurant with a rating of 4.6... Results and Discussion In the methodology section I have already computed the list of the highest-rated restaurants that serve biryani, so for the result it's best to select the restaurant with the highest rating that also requires the least travel time from the user's location.
###Code
print('best biryani point within 5 km is ',top_points.iloc[0]['name'])
print('Distance =',top_points.iloc[0]['travel distance km'],'KM')
print('Travel Time =',round(top_points.iloc[0]['travel time minutes']),'Minutes')
# getting coordinate's location of best biryani point
BP_lat = top_points.iloc[0]['lat']
BP_lng = top_points.iloc[0]['lng']
print('latitude =',BP_lat,' longitude =',BP_lng)
# generating url to fectch path data from user's location to retaurant'S location
bing_key = 'enter your key'
url='http://dev.virtualearth.net/REST/V1/Routes/Driving?wp.0={},{}&wp.1={},{}&optmz=distance&routeAttributes=routePath&key={}'.format(H_lat,
H_long,
BP_lat,
BP_lng,
bing_key)
print(url)
# fetching data
result = requests.get(url).json()
result
# getting coordinate points to draw path on map..
points = result['resourceSets'][0]['resources'][0]['routePath']['line']['coordinates']
points
# drawing path on map
Bng_map = folium.Map(location=[(B_lat+BP_lat)/2,(B_long+BP_lng)/2],zoom_start=16)
folium.Marker(location=[B_lat,B_long],icon=folium.Icon(color='green',icon='fas fa-h-square'),tooltip='The Park Bangalore').add_to(Bng_map)
folium.Marker(location=[BP_lat,BP_lng],icon=folium.Icon(color='red',icon='fas fa-h-square'),tooltip=top_points.iloc[0]['name']).add_to(Bng_map)
distance = top_points.iloc[0]['travel distance km']
time = round(top_points.iloc[0]['travel time minutes'])
folium.PolyLine(points,tooltip='distance = '+str(distance)+' KM and time = '+str(time)+' Minutes').add_to(Bng_map)
Bng_map
###Output
_____no_output_____ |
notebooks/Telco-customer-churn-ICP4D.ipynb | ###Markdown
Predicting Telco Customer Churn using SparkML on IBM Cloud Pak for Data (ICP4D) We'll use this notebook to create a machine learning model to predict customer churn. In this notebook we will build the prediction model using the SparkML library.This notebook walks you through these steps:- Load and Visualize data set.- Build a predictive model with SparkML API- Save the model in the ML repository 1.0 Install required packagesThere are a couple of Python packages we will use in this notebook. First we make sure the Watson Machine Learning client v3 is removed (its not installed by default) and then install/upgrade the v4 version of the client (this package is installed by default on CP4D).WML Client: https://wml-api-pyclient-dev-v4.mybluemix.net/repository
###Code
!pip uninstall watson-machine-learning-client -y
!pip install --user watson-machine-learning-client-v4==1.0.103 --upgrade | tail -n 1
!pip install --user pyspark==2.3.3 --upgrade|tail -n 1
!pip install --user scikit-learn==0.20.3 --upgrade|tail -n 1
import pandas as pd
import numpy as np
import json
import os
# Import the Project Library to read/write project assets
from project_lib import Project
project = Project.access()
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
2.0 Load and Clean dataWe'll load our data as a pandas data frame.* Highlight the cell below by clicking it.* Click the `10/01` "Find data" icon in the upper right of the notebook.* If you are using Virtualized data, begin by choosing the `Files` tab. Then choose your virtualized data (i.e. MYSCHEMA.BILLINGPRODUCTCUSTOMERS), click `Insert to code` and choose `Insert Pandas DataFrame`.* If you are using this notebook without virtualized data, add the locally uploaded file `Telco-Customer-Churn.csv` by choosing the `Files` tab. Then choose the `Telco-Customer-Churn.csv`. Click `Insert to code` and choose `Insert Pandas DataFrame`.* The code to bring the data into the notebook environment and create a Pandas DataFrame will be added to the cell below.* Run the cell
###Code
# Place cursor below and insert the Pandas DataFrame for the Telco churn data
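# For reference, the auto-generated snippet typically looks roughly like the
# sketch below -- the variable and file names depend on what you select, so
# treat every name here as an assumption:
# import pandas as pd
# df_data_1 = pd.read_csv(project.get_file('Telco-Customer-Churn.csv'))
# df_data_1.head()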
###Output
_____no_output_____
###Markdown
We'll use the Pandas naming convention df for our DataFrame. Make sure that the cell below uses the name for the dataframe used above. For the locally uploaded file it should look like df_data_1 or df_data_2 or df_data_x. For the virtualized data case it should look like data_df_1 or data_df_2 or data_df_x.
###Code
# for virtualized data
# df = data_df_1
# for local upload
df = df_data_1
###Output
_____no_output_____
###Markdown
2.1 Drop CustomerID feature (column)
###Code
df = df.drop('customerID', axis=1)
df.head(5)
###Output
_____no_output_____
###Markdown
2.2 Examine the data types of the features
###Code
df.info()
# Statistics for the columns (features). Set it to all, since default is to describe just the numeric features.
df.describe(include = 'all')
###Output
_____no_output_____
###Markdown
We see that Tenure ranges from 0 (new customer) to 6 years, Monthly charges range from $18 to $118, etc. 2.3 Check for need to Convert TotalCharges column to numeric if it is detected as objectIf the above `df.info` shows the "TotalCharges" column as an object, we'll need to convert it to numeric. If you have already done this during a previous exercise for "Data Visualization with Data Refinery", you can skip to step `2.4`.
###Code
# Convert the TotalCharges column to numeric; unparseable entries become NaN
totalCharges = df.columns.get_loc("TotalCharges")
new_col = pd.to_numeric(df.iloc[:, totalCharges], errors='coerce')
df.iloc[:, totalCharges] = pd.Series(new_col)
# Statistics for the columns (features). Set it to all, since default is to describe just the numeric features.
df.describe(include = 'all')
###Output
_____no_output_____
###Markdown
We now see statistics for the `TotalCharges` feature. 2.4 Any NaN values should be removed to create a more accurate model.
###Code
# Check if we have any NaN values and see which features have missing values that should be addressed
print(df.isnull().values.any())
df.isnull().sum()
###Output
_____no_output_____
###Markdown
We should see that the `TotalCharges` column has missing values. There are various ways we can address this issue:- Drop records with missing values - Fill in the missing value with one of the following strategies: Zero, the mean of the values for the column, a random value, etc. (A sketch of the drop-records alternative appears after the imputation below.)
###Code
# Handle missing values for nan_column (TotalCharges)
from sklearn.impute import SimpleImputer
# Find the column number for TotalCharges (starting at 0).
total_charges_idx = df.columns.get_loc("TotalCharges")
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
df.iloc[:, total_charges_idx] = imputer.fit_transform(df.iloc[:, total_charges_idx].values.reshape(-1, 1))
df.iloc[:, total_charges_idx] = pd.Series(df.iloc[:, total_charges_idx])
# Validate that we have addressed any NaN values
print(df.isnull().values.any())
df.isnull().sum()
###Output
_____no_output_____
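###Markdown
For reference, the other strategy mentioned above -- dropping records with missing values -- is a one-liner. This is only a sketch; we keep the imputed DataFrame for the rest of the notebook:
###Code
# df_dropped = df.dropna()  # would drop the rows that had NaN in TotalCharges
# df_dropped.shape          # compare against df.shape to see how many rows were lost
###Output
_____no_output_____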
###Markdown
2.5 Categorize FeaturesWe will categorize some of the columns / features based on whether they are categorical values or continuous (i.e. numerical) values. We will use this in later sections to build visualizations.
###Code
columns_idx = np.s_[0:] # Slice of first row(header) with all columns.
first_record_idx = np.s_[0] # Index of first record
string_fields = [type(fld) is str for fld in df.iloc[first_record_idx, columns_idx]] # All string fields
all_features = [x for x in df.columns if x != 'Churn']
categorical_columns = list(np.array(df.columns)[columns_idx][string_fields])
categorical_features = [x for x in categorical_columns if x != 'Churn']
continuous_features = [x for x in all_features if x not in categorical_features]
#print('All Features: ', all_features)
#print('\nCategorical Features: ', categorical_features)
#print('\nContinuous Features: ', continuous_features)
#print('\nAll Categorical Columns: ', categorical_columns)
###Output
_____no_output_____
###Markdown
2.6 Visualize dataData visualization can be used to find patterns, detect outliers, understand distribution and more. We can use graphs such as:- Histograms, boxplots, etc: To find distribution / spread of our continuous variables.- Bar charts: To show frequency in categorical values.
###Code
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
%matplotlib inline
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
###Output
_____no_output_____
###Markdown
First, we get a high level view of the distribution of `Churn`. What percentage of customers in our dataset are churning vs. not churning?
###Code
print(df.groupby(['Churn']).size())
churn_plot = sns.countplot(data=df, x='Churn', order=df.Churn.value_counts().index)
plt.ylabel('Count')
for p in churn_plot.patches:
height = p.get_height()
churn_plot.text(p.get_x()+p.get_width()/2., height + 1,'{0:.0%}'.format(height/float(len(df))),ha="center")
plt.show()
###Output
_____no_output_____
###Markdown
We can use frequency count charts to get an understanding of the categorical features relative to `Churn`:- For the `gender` feature, we have relatively equal rates of churn by `gender`.- For the `InternetService` feature, we have higher churn for those that have "Fiber optic" service versus those with "DSL".
###Code
# Categorical feature count plots
f, ((ax1, ax2, ax3), (ax4, ax5, ax6), (ax7, ax8, ax9), (ax10, ax11, ax12), (ax13, ax14, ax15)) = plt.subplots(5, 3, figsize=(20, 20))
ax = [ax1, ax2, ax3, ax4, ax5, ax6, ax7, ax8, ax9, ax10, ax11, ax12, ax13, ax14, ax15 ]
for i in range(len(categorical_features)):
sns.countplot(x = categorical_features[i], hue="Churn", data=df, ax=ax[i])
###Output
_____no_output_____
###Markdown
We can use histogram charts to get an understanding of the distribution of our continuous / numerical features relative to Churn.- We can see that for the `MonthlyCharges` feature, customers that churn tend to pay higher monthly fees than those that stay.- We can see that for the `tenure` feature, customers that churn tend to be relatively new customers.
###Code
# Continuous feature histograms.
fig, ax = plt.subplots(2, 2, figsize=(28, 8))
df[df.Churn == 'No'][continuous_features].hist(bins=20, color="blue", alpha=0.5, ax=ax)
df[df.Churn == 'Yes'][continuous_features].hist(bins=20, color="orange", alpha=0.5, ax=ax)
# Or use displots
#sns.set_palette("hls", 3)
#f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(25, 25))
#ax = [ax1, ax2, ax3, ax4]
#for i in range(len(continuous_features)):
# sns.distplot(df[continuous_features[i]], bins=20, hist=True, ax=ax[i])
# Create Grid for pairwise relationships
gr = sns.PairGrid(df, height=5, hue="Churn")
gr = gr.map_diag(plt.hist)
gr = gr.map_offdiag(plt.scatter)
gr = gr.add_legend()
# Plot boxplots of numerical columns. More variation in the boxplot implies higher significance.
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(25, 25))
ax = [ax1, ax2, ax3, ax4]
for i in range(len(continuous_features)):
sns.boxplot(x = 'Churn', y = continuous_features[i], data=df, ax=ax[i])
###Output
_____no_output_____
###Markdown
3.0 Create a modelNow we can create our machine learning model. You could use the insights / intuition gained from the data visualization steps above to decide what kind of model to create or which features to use. We will create a simple classification model.
###Code
from pyspark.sql import SparkSession
import pandas as pd
import json
spark = SparkSession.builder.getOrCreate()
df_data = spark.createDataFrame(df)
df_data.head()
###Output
_____no_output_____
###Markdown
3.1 Split the data into training and test sets
###Code
spark_df = df_data
(train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24)
print("Number of records for training: " + str(train_data.count()))
print("Number of records for evaluation: " + str(test_data.count()))
###Output
_____no_output_____
###Markdown
3.2 Examine the Spark DataFrame SchemaLook at the data types to determine requirements for feature engineering
###Code
spark_df.printSchema()
###Output
_____no_output_____
###Markdown
3.3 Use StringIndexer to encode a string column of labels to a column of label indicesWe are using the Pipeline package to build the development steps as pipeline. We are using StringIndexer to handle categorical / string features from the dataset. StringIndexer encodes a string column of labels to a column of label indicesWe then use VectorAssembler to asemble these features into a vector. Pipelines API requires that input variables are passed in a vector
###Code
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline, Model
si_gender = StringIndexer(inputCol = 'gender', outputCol = 'gender_IX')
si_Partner = StringIndexer(inputCol = 'Partner', outputCol = 'Partner_IX')
si_Dependents = StringIndexer(inputCol = 'Dependents', outputCol = 'Dependents_IX')
si_PhoneService = StringIndexer(inputCol = 'PhoneService', outputCol = 'PhoneService_IX')
si_MultipleLines = StringIndexer(inputCol = 'MultipleLines', outputCol = 'MultipleLines_IX')
si_InternetService = StringIndexer(inputCol = 'InternetService', outputCol = 'InternetService_IX')
si_OnlineSecurity = StringIndexer(inputCol = 'OnlineSecurity', outputCol = 'OnlineSecurity_IX')
si_OnlineBackup = StringIndexer(inputCol = 'OnlineBackup', outputCol = 'OnlineBackup_IX')
si_DeviceProtection = StringIndexer(inputCol = 'DeviceProtection', outputCol = 'DeviceProtection_IX')
si_TechSupport = StringIndexer(inputCol = 'TechSupport', outputCol = 'TechSupport_IX')
si_StreamingTV = StringIndexer(inputCol = 'StreamingTV', outputCol = 'StreamingTV_IX')
si_StreamingMovies = StringIndexer(inputCol = 'StreamingMovies', outputCol = 'StreamingMovies_IX')
si_Contract = StringIndexer(inputCol = 'Contract', outputCol = 'Contract_IX')
si_PaperlessBilling = StringIndexer(inputCol = 'PaperlessBilling', outputCol = 'PaperlessBilling_IX')
si_PaymentMethod = StringIndexer(inputCol = 'PaymentMethod', outputCol = 'PaymentMethod_IX')
si_Label = StringIndexer(inputCol="Churn", outputCol="label").fit(spark_df)
label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_Label.labels)
###Output
_____no_output_____
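###Markdown
To build intuition for what a single StringIndexer does, here is a minimal sketch on a toy DataFrame (the data is illustrative only; `si_gender` is the indexer defined above):
###Code
toy = spark.createDataFrame([("Male",), ("Female",), ("Female",)], ["gender"])
si_gender.fit(toy).transform(toy).show()  # the most frequent label receives index 0.0
###Output
_____no_output_____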
###Markdown
3.4 Create a single vector
###Code
va_features = VectorAssembler(inputCols=['gender_IX', 'SeniorCitizen', 'Partner_IX', 'Dependents_IX', 'PhoneService_IX', 'MultipleLines_IX', 'InternetService_IX', \
'OnlineSecurity_IX', 'OnlineBackup_IX', 'DeviceProtection_IX', 'TechSupport_IX', 'StreamingTV_IX', 'StreamingMovies_IX', \
'Contract_IX', 'PaperlessBilling_IX', 'PaymentMethod_IX', 'TotalCharges', 'MonthlyCharges'], outputCol="features")
###Output
_____no_output_____
###Markdown
3.5 Create a pipeline, and fit a model using RandomForestClassifier Assemble all the stages into a pipeline. We don't expect a clean linear regression, so we'll use RandomForestClassifier, which fits an ensemble of decision trees to the data.
###Code
classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages=[si_gender, si_Partner, si_Dependents, si_PhoneService, si_MultipleLines, si_InternetService, si_OnlineSecurity, si_OnlineBackup, si_DeviceProtection, \
si_TechSupport, si_StreamingTV, si_StreamingMovies, si_Contract, si_PaperlessBilling, si_PaymentMethod, si_Label, va_features, \
classifier, label_converter])
model = pipeline.fit(train_data)
predictions = model.transform(test_data)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderROC')
area_under_curve = evaluatorDT.evaluate(predictions)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction", metricName='areaUnderPR')
area_under_PR = evaluatorDT.evaluate(predictions)
print("areaUnderROC = %g" % area_under_curve)
###Output
_____no_output_____
###Markdown
4.0 Save the model and test dataNow the model can be saved for future deployment. The model will be saved using the Watson Machine Learning client, to a deployment space.
###Code
MODEL_NAME = "INSERT-YOUR-MODEL-NAME-HERE"
DEPLOYMENT_SPACE_NAME = 'INSERT-YOUR-DEPLOYMENT-SPACE-NAME-HERE'
###Output
_____no_output_____
###Markdown
4.1 Save the model to ICP4D local Watson Machine LearningReplace the `username` and `password` values of `*****` with your Cloud Pak for Data `username` and `password`. The value for `url` should match the `url` for your Cloud Pak for Data cluster.
###Code
from watson_machine_learning_client import WatsonMachineLearningAPIClient
wml_credentials = {
"url": "******",
"username": "*****",
"password" : "*****",
"instance_id": "wml_local",
"version" : "2.5.0"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
client.spaces.list()
###Output
_____no_output_____
###Markdown
Use the desired space as the `default_space`The deployment space ID will be looked up based on the name specified above. If you do not receive a space GUID as an output to the next cell, do not proceed until you have created a deployment space.
###Code
# Be sure to update the name of the space with the one you want to use.
client.spaces.list()
all_spaces = client.spaces.get_details()['resources']
space_id = None
for space in all_spaces:
if space['entity']['name'] == DEPLOYMENT_SPACE_NAME:
space_id = space["metadata"]["guid"]
print("\nDeployment Space GUID: ", space_id)
if space_id is None:
print("WARNING: Your space does not exist. Create a deployment space before proceeding to the next cell.")
#space_id = client.spaces.store(meta_props={client.spaces.ConfigurationMetaNames.NAME: space_name})["metadata"]["guid"]
###Output
_____no_output_____
###Markdown
Set the default spaceThe next cell sets the default space to the GUID of your deployment space, e.g. `client.set.default_space(space_id)`.
###Code
# Now set the default space to the GUID for your deployment space. If this is successful, you will see a 'SUCCESS' message.
client.set.default_space(space_id)
###Output
_____no_output_____
###Markdown
Save the Model
###Code
# Store our model
model_props = {client.repository.ModelMetaNames.NAME: MODEL_NAME,
client.repository.ModelMetaNames.RUNTIME_UID : "spark-mllib_2.3",
client.repository.ModelMetaNames.TYPE : "mllib_2.3"}
published_model = client.repository.store_model(model=model, pipeline=pipeline, meta_props=model_props, training_data=train_data)
print(json.dumps(published_model, indent=3))
# Use this cell to do any cleanup of previously created models and deployments
client.repository.list_models()
client.deployments.list()
# client.repository.delete('GUID of stored model')
# client.deployments.delete('GUID of deployed model')
###Output
_____no_output_____
###Markdown
5.0 Save Test DataWe will save the test data we used to evaluate the model to our project.
###Code
write_score_CSV=test_data.toPandas().drop(['Churn'], axis=1)
write_score_CSV.to_csv('/project_data/data_asset/TelcoCustomerSparkMLBatchScore.csv', sep=',', index=False)
#project.save_data('TelcoCustomerSparkMLBatchScore.csv', write_score_CSV.to_csv())
write_eval_CSV=test_data.toPandas()
write_eval_CSV.to_csv('/project_data/data_asset/TelcoCustomerSparkMLEval.csv', sep=',', index=False)
#project.save_data('TelcoCustomerSparkMLEval.csv', write_eval_CSV.to_csv())
###Output
_____no_output_____
###Markdown
Telco Customer Churn for ICP4D We'll use this notebook to create a machine learning model to predict customer churn. 1.0 Install required packages
###Code
!pip install --user watson-machine-learning-client --upgrade | tail -n 1
###Output
_____no_output_____
###Markdown
2.0 Load and Clean dataWe'll load our data as a pandas data frame.* Highlight the cell below by clicking it.* Click the `10/01` "Find data" icon in the upper right of the notebook.* If you are using the [ICP4D Learning Path] and have the [Virtualized data], begin by choosing the `Remote` tab. Then choose your virtualized data (i.e. User.billingProductCustomers), click `Insert to code` and choose `Insert Pandas DataFrame`.* If you are using this code pattern outside of the [ICP4D Learning Path], add the locally uploaded file `Telco-Customer-Churn.csv` by choosing the `Local` tab. Then choose the `Telco-Customer-Churn.csv`, click `Insert to code` and choose `Insert Pandas DataFrame`.* The code to bring the data into the notebook environment and create a Pandas DataFrame will be added to the cell below.* Run the cell
###Code
# Place cursor below and insert the Pandas DataFrame for the Telco churn data
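# A minimal sketch of the kind of cell the "Insert Pandas DataFrame" action
# generates for a locally uploaded file -- an assumption on my part, since the
# generated code depends on your project setup; only the resulting variable
# name (df_data_1) is what the next cell relies on:
import pandas as pd
df_data_1 = pd.read_csv('Telco-Customer-Churn.csv')
df_data_1.head()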
###Output
_____no_output_____
###Markdown
We'll use the Pandas naming convention `df` for our DataFrame. Make sure that the cell below uses the name for the dataframe used above, i.e df1, df2,... dfX for the remote data using Data Virtualization, or df_data1, df_data2, ... if using local data.
###Code
# df = df1
df = df_data_1
###Output
_____no_output_____
###Markdown
2.1 Drop CustomerID feature (column)
###Code
df = df.drop('customerID', axis=1)
df.head(5)
###Output
_____no_output_____
###Markdown
2.2 Examine the data types of the features
###Code
df.info()
###Output
_____no_output_____
###Markdown
2.3 Convert TotalCharges column to numeric as it is detected as object
###Code
new_col = pd.to_numeric(df.iloc[:, 18], errors='coerce')
new_col
###Output
_____no_output_____
###Markdown
2.4 Modify our dataframe to reflect the new datatype
###Code
df.iloc[:, 18] = pd.Series(new_col)
df
###Output
_____no_output_____
###Markdown
2.5 Any NaN values should be removed to create a more accurate model. Prior examination shows NaN values for `TotalCharges`
###Code
# Check if we have any NaN values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
Set `nan_column` to the column number for TotalCharges (starting at 0).
###Code
nan_column = df.columns.get_loc("TotalCharges")
print(nan_column)
# Handle missing values for nan_column (TotalCharges)
from sklearn.preprocessing import Imputer
imp = Imputer(missing_values="NaN", strategy="mean")
df.iloc[:, nan_column] = imp.fit_transform(df.iloc[:, nan_column].values.reshape(-1, 1))
df.iloc[:, nan_column] = pd.Series(df.iloc[:, nan_column])
# Check if we have any NaN values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
2.6 Visualize data
###Code
import json
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing, svm
from itertools import combinations
from sklearn.preprocessing import PolynomialFeatures, LabelEncoder, StandardScaler
import sklearn.feature_selection
from sklearn.model_selection import train_test_split
from collections import defaultdict
from sklearn import metrics
# Plot Tenure Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="tenure", hue="Churn", data=df)
# Plot Contract Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="Contract", hue="Churn", data=df)
# Plot TechSupport Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="TechSupport", hue="Churn", data=df)
# Create Grid for pairwise relationships
gr = sns.PairGrid(df, size=5, hue="Churn")
gr = gr.map_diag(plt.hist)
gr = gr.map_offdiag(plt.scatter)
gr = gr.add_legend()
totalCharge = df.columns.get_loc("TotalCharges")
print(totalCharge)
# Set up plot size
fig, ax = plt.subplots(figsize=(6,6))
# Attributes distribution
a = sns.boxplot(orient="v", palette="hls", data=df.iloc[:, totalCharge], fliersize=14)
# Total Charges data distribution
histogram = sns.distplot(df.iloc[:, totalCharge], hist=True)
plt.show()
tenure = df.columns.get_loc("tenure")
print(tenure)
# Tenure data distribution
histogram = sns.distplot(df.iloc[:, tenure], hist=True)
plt.show()
monthly = df.columns.get_loc("MonthlyCharges")
print(monthly)
# Monthly Charges data distribution
histogram = sns.distplot(df.iloc[:, monthly], hist=True)
plt.show()
###Output
_____no_output_____
###Markdown
Understand Data Distribution 3.0 Create a model
###Code
from pyspark.sql import SparkSession
import pandas as pd
import json
spark = SparkSession.builder.getOrCreate()
df_data = spark.createDataFrame(df)
df_data.head()
###Output
_____no_output_____
###Markdown
3.1 Split the data into training and test sets
###Code
spark_df = df_data
(train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24)
print("Number of records for training: " + str(train_data.count()))
print("Number of records for evaluation: " + str(test_data.count()))
###Output
_____no_output_____
###Markdown
3.2 Examine the Spark DataFrame SchemaLook at the data types to determine requirements for feature engineering
###Code
spark_df.printSchema()
###Output
_____no_output_____
###Markdown
3.3 Use StringIndexer to encode a string column of labels to a column of label indices
###Code
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline, Model
si_gender = StringIndexer(inputCol = 'gender', outputCol = 'gender_IX')
si_Partner = StringIndexer(inputCol = 'Partner', outputCol = 'Partner_IX')
si_Dependents = StringIndexer(inputCol = 'Dependents', outputCol = 'Dependents_IX')
si_PhoneService = StringIndexer(inputCol = 'PhoneService', outputCol = 'PhoneService_IX')
si_MultipleLines = StringIndexer(inputCol = 'MultipleLines', outputCol = 'MultipleLines_IX')
si_InternetService = StringIndexer(inputCol = 'InternetService', outputCol = 'InternetService_IX')
si_OnlineSecurity = StringIndexer(inputCol = 'OnlineSecurity', outputCol = 'OnlineSecurity_IX')
si_OnlineBackup = StringIndexer(inputCol = 'OnlineBackup', outputCol = 'OnlineBackup_IX')
si_DeviceProtection = StringIndexer(inputCol = 'DeviceProtection', outputCol = 'DeviceProtection_IX')
si_TechSupport = StringIndexer(inputCol = 'TechSupport', outputCol = 'TechSupport_IX')
si_StreamingTV = StringIndexer(inputCol = 'StreamingTV', outputCol = 'StreamingTV_IX')
si_StreamingMovies = StringIndexer(inputCol = 'StreamingMovies', outputCol = 'StreamingMovies_IX')
si_Contract = StringIndexer(inputCol = 'Contract', outputCol = 'Contract_IX')
si_PaperlessBilling = StringIndexer(inputCol = 'PaperlessBilling', outputCol = 'PaperlessBilling_IX')
si_PaymentMethod = StringIndexer(inputCol = 'PaymentMethod', outputCol = 'PaymentMethod_IX')
si_Label = StringIndexer(inputCol="Churn", outputCol="label").fit(spark_df)
label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_Label.labels)
###Output
_____no_output_____
###Markdown
3.4 Create a single vector
###Code
va_features = VectorAssembler(inputCols=['gender_IX', 'SeniorCitizen', 'Partner_IX', 'Dependents_IX', 'PhoneService_IX', 'MultipleLines_IX', 'InternetService_IX', \
'OnlineSecurity_IX', 'OnlineBackup_IX', 'DeviceProtection_IX', 'TechSupport_IX', 'StreamingTV_IX', 'StreamingMovies_IX', \
'Contract_IX', 'PaperlessBilling_IX', 'PaymentMethod_IX', 'TotalCharges', 'MonthlyCharges'], outputCol="features")
###Output
_____no_output_____
###Markdown
3.5 Create a pipeline, and fit a model using RandomForestClassifier Assemble all the stages into a pipeline. We don't expect a clean linear regression, so we'll use RandomForestClassifier to find the best decision tree for the data.
###Code
classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages=[si_gender, si_Partner, si_Dependents, si_PhoneService, si_MultipleLines, si_InternetService, si_OnlineSecurity, si_OnlineBackup, si_DeviceProtection, \
si_TechSupport, si_StreamingTV, si_StreamingMovies, si_Contract, si_PaperlessBilling, si_PaymentMethod, si_Label, va_features, \
classifier, label_converter])
model = pipeline.fit(train_data)
predictions = model.transform(test_data)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction")
area_under_curve = evaluatorDT.evaluate(predictions)
#default evaluation is areaUnderROC
print("areaUnderROC = %g" % area_under_curve)
###Output
_____no_output_____
###Markdown
4.0 Save the model and test data Add a unique name for MODEL_NAME.
###Code
MODEL_NAME = "myname model"
###Output
_____no_output_____
###Markdown
4.1 Save the model to ICP4D local Watson Machine Learning
###Code
from dsx_ml.ml import save
save(name=MODEL_NAME, model=model, test_data=test_data, algorithm_type='Classification',
description='This is a SparkML Model to Classify Telco Customer Churn Risk')
###Output
_____no_output_____
###Markdown
4.2 Write the test data without label to a .csv so that we can later use it for batch scoring
###Code
write_score_CSV=test_data.toPandas().drop(['Churn'], axis=1)
write_score_CSV.to_csv('../datasets/TelcoCustomerSparkMLBatchScore.csv', sep=',', index=False)
###Output
_____no_output_____
###Markdown
4.3 Write the test data to a .csv so that we can later use it for evaluation
###Code
write_eval_CSV=test_data.toPandas()
write_eval_CSV.to_csv('../datasets/TelcoCustomerSparkMLEval.csv', sep=',', index=False)
###Output
_____no_output_____
###Markdown
Telco Customer Churn for ICP4D We'll use this notebook to create a machine learning model to predict customer churn. 1.0 Install required packages
###Code
!pip install --user watson-machine-learning-client --upgrade | tail -n 1
!pip install --user pyspark==2.3.3 --upgrade|tail -n 1
###Output
_____no_output_____
###Markdown
2.0 Load and Clean dataWe'll load our data as a pandas data frame.* Highlight the cell below by clicking it.* Click the `01/00` "Find and add data" icon in the upper right of the notebook.* If you are using Virtualized data, begin by choosing the `Files` tab. Then choose your virtualized data (e.g. MYSCHEMA.BILLINGPRODUCTCUSTOMERS), click `Insert to code` and choose `Insert Pandas DataFrame`.* If you are using this notebook without virtualized data, add the locally uploaded file `Telco-Customer-Churn.csv` by choosing the `Files` tab. Then choose the `Telco-Customer-Churn.csv` file, click `Insert to code` and choose `Insert Pandas DataFrame`.* The code to bring the data into the notebook environment and create a Pandas DataFrame will be added to the cell below.* Run the cell
###Code
# Place cursor below and insert the Pandas DataFrame for the Telco churn data
import pandas as pd
###Output
_____no_output_____
###Markdown
We'll use the Pandas naming convention `df` for our DataFrame. Make sure that the cell below uses the same name for the DataFrame used in the cell above. For the locally uploaded file it should look like df_data_1, df_data_2, or df_data_x. For the virtualized data case it should look like data_df_1, data_df_2, or data_df_x.
###Code
# For virtualized data:
# df = data_df_1
# For local upload:
df = df_data_1
###Output
_____no_output_____
###Markdown
2.1 Drop the CustomerID feature (column)
###Code
df = df.drop('customerID', axis=1)
df.head(5)
###Output
_____no_output_____
###Markdown
2.2 Examine the data types of the features (columns)
###Code
df.info()
###Output
_____no_output_____
###Markdown
2.3 Check for need to convert the TotalCharges column to numeric if it is detected as objectIf `df.info` above shows the "TotalCharges" column as an object, we need to convert it to numeric. If you have already done this during the previous exercise, "Data Visualization with Data Refinery", you can skip to step `2.5`.
###Code
totalCharges = df.columns.get_loc("TotalCharges")
print(totalCharges)
new_col = pd.to_numeric(df.iloc[:, totalCharges], errors='coerce')
new_col
###Output
_____no_output_____
###Markdown
2.4 Modify our dataframe to reflect the new datatype
###Code
df.iloc[:, totalCharges] = pd.Series(new_col)
df
###Output
_____no_output_____
###Markdown
2.5 Any NaN values should be removed to create a more accurate model.
###Code
# Check if we have any NaN ('Not a Number') values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
Set `nan_column` to the column number for TotalCharges (starting at 0).
###Code
nan_column = df.columns.get_loc("TotalCharges")
print(nan_column)
# Handle missing values for nan_column (TotalCharges)
from sklearn.preprocessing import Imputer
imp = Imputer(missing_values="NaN", strategy="mean")
df.iloc[:, nan_column] = imp.fit_transform(df.iloc[:, nan_column].values.reshape(-1, 1))
df.iloc[:, nan_column] = pd.Series(df.iloc[:, nan_column])
# Check if we have any NaN values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
2.6 Visualize the data
###Code
import json
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing, svm
from itertools import combinations
from sklearn.preprocessing import PolynomialFeatures, LabelEncoder, StandardScaler
import sklearn.feature_selection
from sklearn.model_selection import train_test_split
from collections import defaultdict
from sklearn import metrics
# Plot Tenure Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="tenure", hue="Churn", data=df)
# Plot Contract Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="Contract", hue="Churn", data=df)
# Plot TechSupport Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="TechSupport", hue="Churn", data=df)
# Create Grid for pairwise relationships
gr = sns.PairGrid(df, size=5, hue="Churn")
gr = gr.map_diag(plt.hist)
gr = gr.map_offdiag(plt.scatter)
gr = gr.add_legend()
# Set up plot size
fig, ax = plt.subplots(figsize=(6,6))
# Attributes distribution
a = sns.boxplot(orient="v", palette="hls", data=df.iloc[:, totalCharges], fliersize=14)
# Total Charges data distribution
histogram = sns.distplot(df.iloc[:, totalCharges], hist=True)
plt.show()
tenure = df.columns.get_loc("tenure")
print(tenure)
# Tenure data distribution
histogram = sns.distplot(df.iloc[:, tenure], hist=True)
plt.show()
monthly = df.columns.get_loc("MonthlyCharges")
print(monthly)
# Monthly Charges data distribution
histogram = sns.distplot(df.iloc[:, monthly], hist=True)
plt.show()
###Output
_____no_output_____
###Markdown
Understand Data Distribution 3.0 Create a model
###Code
from pyspark.sql import SparkSession
import pandas as pd
import json
spark = SparkSession.builder.getOrCreate()
df_data = spark.createDataFrame(df)
df_data.head()
###Output
_____no_output_____
###Markdown
3.1 Split the data into training and test sets
###Code
spark_df = df_data
(train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24)
print("Número de registros para entrenamiento: " + str(train_data.count()))
print("Número de registros para evaluación: " + str(test_data.count()))
###Output
_____no_output_____
###Markdown
3.2 Examine the Spark DataFrame SchemaLook at the data types to determine requirements for feature engineering
###Code
spark_df.printSchema()
###Output
_____no_output_____
###Markdown
3.3 Use StringIndexer to encode a string column of labels to a column of label indices
###Code
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline, Model
si_gender = StringIndexer(inputCol = 'gender', outputCol = 'gender_IX')
si_Partner = StringIndexer(inputCol = 'Partner', outputCol = 'Partner_IX')
si_Dependents = StringIndexer(inputCol = 'Dependents', outputCol = 'Dependents_IX')
si_PhoneService = StringIndexer(inputCol = 'PhoneService', outputCol = 'PhoneService_IX')
si_MultipleLines = StringIndexer(inputCol = 'MultipleLines', outputCol = 'MultipleLines_IX')
si_InternetService = StringIndexer(inputCol = 'InternetService', outputCol = 'InternetService_IX')
si_OnlineSecurity = StringIndexer(inputCol = 'OnlineSecurity', outputCol = 'OnlineSecurity_IX')
si_OnlineBackup = StringIndexer(inputCol = 'OnlineBackup', outputCol = 'OnlineBackup_IX')
si_DeviceProtection = StringIndexer(inputCol = 'DeviceProtection', outputCol = 'DeviceProtection_IX')
si_TechSupport = StringIndexer(inputCol = 'TechSupport', outputCol = 'TechSupport_IX')
si_StreamingTV = StringIndexer(inputCol = 'StreamingTV', outputCol = 'StreamingTV_IX')
si_StreamingMovies = StringIndexer(inputCol = 'StreamingMovies', outputCol = 'StreamingMovies_IX')
si_Contract = StringIndexer(inputCol = 'Contract', outputCol = 'Contract_IX')
si_PaperlessBilling = StringIndexer(inputCol = 'PaperlessBilling', outputCol = 'PaperlessBilling_IX')
si_PaymentMethod = StringIndexer(inputCol = 'PaymentMethod', outputCol = 'PaymentMethod_IX')
si_Label = StringIndexer(inputCol="Churn", outputCol="label").fit(spark_df)
label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_Label.labels)
###Output
_____no_output_____
###Markdown
3.4 Create a single vector
###Code
va_features = VectorAssembler(inputCols=['gender_IX', 'SeniorCitizen', 'Partner_IX', 'Dependents_IX', 'PhoneService_IX', 'MultipleLines_IX', 'InternetService_IX', \
'OnlineSecurity_IX', 'OnlineBackup_IX', 'DeviceProtection_IX', 'TechSupport_IX', 'StreamingTV_IX', 'StreamingMovies_IX', \
'Contract_IX', 'PaperlessBilling_IX', 'PaymentMethod_IX', 'TotalCharges', 'MonthlyCharges'], outputCol="features")
###Output
_____no_output_____
###Markdown
3.5 Create a pipeline, and fit a model using RandomForestClassifier Assemble all the stages into a pipeline. We don't expect a clean linear regression, so we'll use RandomForestClassifier to find the best decision tree for the data.
###Code
classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages=[si_gender, si_Partner, si_Dependents, si_PhoneService, si_MultipleLines, si_InternetService, si_OnlineSecurity, si_OnlineBackup, si_DeviceProtection, \
si_TechSupport, si_StreamingTV, si_StreamingMovies, si_Contract, si_PaperlessBilling, si_PaymentMethod, si_Label, va_features, \
classifier, label_converter])
model = pipeline.fit(train_data)
predictions = model.transform(test_data)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction")
area_under_curve = evaluatorDT.evaluate(predictions)
#default evaluation is areaUnderROC
print("areaUnderROC = %g" % area_under_curve)
###Output
_____no_output_____
###Markdown
4.0 Save the model and test data Add a unique name for MODEL_NAME.
###Code
MODEL_NAME = "my-model my-date"
###Output
_____no_output_____
###Markdown
4.1 Save the model to ICP4D local Watson Machine LearningReplace the `username` and `password` values with your Watson Machine Learning credentials. The value for `url` should match the `url` for your Cloud Pak for Data cluster.
###Code
from watson_machine_learning_client import WatsonMachineLearningAPIClient
wml_credentials = {
"url": "https://zen-cpd-zen.apps.os-workshop-nov22.vz-cpd-nov22.com",
"username": "*****",
"password" : "*****",
"instance_id": "wml_local",
"version" : "2.5.0"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
client.spaces.list()
###Output
_____no_output_____
###Markdown
Use the desired space as the `default_space`Put the `GUID` of the desired space as the parameter below.
###Code
client.set.default_space('<GUID>')
# Store our model
model_props = {client.repository.ModelMetaNames.NAME: MODEL_NAME,
client.repository.ModelMetaNames.RUNTIME_UID : "spark-mllib_2.3",
client.repository.ModelMetaNames.TYPE : "mllib_2.3"}
published_model = client.repository.store_model(model=model, pipeline=pipeline, meta_props=model_props, training_data=train_data)
# Use this cell to do any cleanup of previously created models and deployments
client.repository.list_models()
client.deployments.list()
# client.repository.delete('GUID of stored model')
# client.deployments.delete('GUID of deployed model')
###Output
_____no_output_____
###Markdown
4.2 Write the test data without label to a .csv so that we can later use it for batch scoring
###Code
write_score_CSV=test_data.toPandas().drop(['Churn'], axis=1)
write_score_CSV.to_csv('/project_data/data_asset/TelcoCustomerSparkMLBatchScore.csv', sep=',', index=False)
###Output
_____no_output_____
###Markdown
4.3 Write the test data to a .csv so that we can later use it for evaluation
###Code
write_eval_CSV=test_data.toPandas()
write_eval_CSV.to_csv('/project_data/data_asset/TelcoCustomerSparkMLEval.csv', sep=',', index=False)
###Output
_____no_output_____
###Markdown
Telco Customer Churn for ICP4D We'll use this notebook to create a machine learning model to predict customer churn. 1.0 Install required packages
###Code
!pip install --user watson-machine-learning-client --upgrade | tail -n 1
!pip install --user pyspark==2.3.3 --upgrade|tail -n 1
###Output
_____no_output_____
###Markdown
2.0 Load and Clean dataWe'll load our data as a pandas data frame.* Highlight the cell below by clicking it.* Click the `10/01` "Find data" icon in the upper right of the notebook.* If you are using Virtualized data, begin by choosing the `Files` tab. Then choose your virtualized data (i.e. MYSCHEMA.BILLINGPRODUCTCUSTOMERS), click `Insert to code` and choose `Insert Pandas DataFrame`.* If you are using this notebook without virtualized data, add the locally uploaded file `Telco-Customer-Churn.csv` by choosing the `Files` tab. Then choose the `Telco-Customer-Churn.csv`. Click `Insert to code` and choose `Insert Pandas DataFrame`.* The code to bring the data into the notebook environment and create a Pandas DataFrame will be added to the cell below.* Run the cell
###Code
# Place cursor below and insert the Pandas DataFrame for the Telco churn data
import pandas as pd
###Output
_____no_output_____
###Markdown
We'll use the Pandas naming convention df for our DataFrame. Make sure that the cell below uses the name for the dataframe used above. For the locally uploaded file it should look like df_data_1 or df_data_2 or df_data_x. For the virtualized data case it should look like data_df_1 or data_df_2 or data_df_x.
###Code
# for virtualized data
# df = data_df_1
# for local upload
df = df_data_1
###Output
_____no_output_____
###Markdown
2.1 Drop CustomerID feature (column)
###Code
df = df.drop('customerID', axis=1)
df.head(5)
###Output
_____no_output_____
###Markdown
2.2 Examine the data types of the features
###Code
df.info()
###Output
_____no_output_____
###Markdown
2.3 Check for need to convert TotalCharges column to numeric if it is detected as objectIf the above `df.info` shows the "TotalCharges" column as an object, we'll need to convert it to numeric. If you have already done this during a previous exercise for "Data Visualization with Data Refinery", you can skip to step `2.5`.
###Code
totalCharges = df.columns.get_loc("TotalCharges")
print(totalCharges)
new_col = pd.to_numeric(df.iloc[:, totalCharges], errors='coerce')
new_col
###Output
_____no_output_____
###Markdown
2.4 Modify our dataframe to reflect the new datatype
###Code
df.iloc[:, totalCharges] = pd.Series(new_col)
df
###Output
_____no_output_____
###Markdown
2.5 Any NaN values should be removed to create a more accurate model.
###Code
# Check if we have any NaN values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
Set `nan_column` to the column number for TotalCharges (starting at 0).
###Code
nan_column = df.columns.get_loc("TotalCharges")
print(nan_column)
# Handle missing values for nan_column (TotalCharges)
from sklearn.preprocessing import Imputer
imp = Imputer(missing_values="NaN", strategy="mean")
df.iloc[:, nan_column] = imp.fit_transform(df.iloc[:, nan_column].values.reshape(-1, 1))
df.iloc[:, nan_column] = pd.Series(df.iloc[:, nan_column])
# Check if we have any NaN values
df.isnull().values.any()
###Output
_____no_output_____
###Markdown
2.6 Visualize data
###Code
import json
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing, svm
from itertools import combinations
from sklearn.preprocessing import PolynomialFeatures, LabelEncoder, StandardScaler
import sklearn.feature_selection
from sklearn.model_selection import train_test_split
from collections import defaultdict
from sklearn import metrics
# Plot Tenure Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="tenure", hue="Churn", data=df)
# Plot Contract Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="Contract", hue="Churn", data=df)
# Plot TechSupport Frequency count
sns.set(style="darkgrid")
sns.set_palette("hls", 3)
fig, ax = plt.subplots(figsize=(20,10))
ax = sns.countplot(x="TechSupport", hue="Churn", data=df)
# Create Grid for pairwise relationships
gr = sns.PairGrid(df, size=5, hue="Churn")
gr = gr.map_diag(plt.hist)
gr = gr.map_offdiag(plt.scatter)
gr = gr.add_legend()
# Set up plot size
fig, ax = plt.subplots(figsize=(6,6))
# Attributes distribution
a = sns.boxplot(orient="v", palette="hls", data=df.iloc[:, totalCharges], fliersize=14)
# Total Charges data distribution
histogram = sns.distplot(df.iloc[:, totalCharges], hist=True)
plt.show()
tenure = df.columns.get_loc("tenure")
print(tenure)
# Tenure data distribution
histogram = sns.distplot(df.iloc[:, tenure], hist=True)
plt.show()
monthly = df.columns.get_loc("MonthlyCharges")
print(monthly)
# Monthly Charges data distribution
histogram = sns.distplot(df.iloc[:, monthly], hist=True)
plt.show()
###Output
_____no_output_____
###Markdown
Understand Data Distribution 3.0 Create a model
###Code
from pyspark.sql import SparkSession
import pandas as pd
import json
spark = SparkSession.builder.getOrCreate()
df_data = spark.createDataFrame(df)
df_data.head()
###Output
_____no_output_____
###Markdown
3.1 Split the data into training and test sets
###Code
spark_df = df_data
(train_data, test_data) = spark_df.randomSplit([0.8, 0.2], 24)
print("Number of records for training: " + str(train_data.count()))
print("Number of records for evaluation: " + str(test_data.count()))
###Output
_____no_output_____
###Markdown
3.2 Examine the Spark DataFrame SchemaLook at the data types to determine requirements for feature engineering
###Code
spark_df.printSchema()
###Output
_____no_output_____
###Markdown
3.3 Use StringIndexer to encode a string column of labels to a column of label indices
###Code
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml import Pipeline, Model
si_gender = StringIndexer(inputCol = 'gender', outputCol = 'gender_IX')
si_Partner = StringIndexer(inputCol = 'Partner', outputCol = 'Partner_IX')
si_Dependents = StringIndexer(inputCol = 'Dependents', outputCol = 'Dependents_IX')
si_PhoneService = StringIndexer(inputCol = 'PhoneService', outputCol = 'PhoneService_IX')
si_MultipleLines = StringIndexer(inputCol = 'MultipleLines', outputCol = 'MultipleLines_IX')
si_InternetService = StringIndexer(inputCol = 'InternetService', outputCol = 'InternetService_IX')
si_OnlineSecurity = StringIndexer(inputCol = 'OnlineSecurity', outputCol = 'OnlineSecurity_IX')
si_OnlineBackup = StringIndexer(inputCol = 'OnlineBackup', outputCol = 'OnlineBackup_IX')
si_DeviceProtection = StringIndexer(inputCol = 'DeviceProtection', outputCol = 'DeviceProtection_IX')
si_TechSupport = StringIndexer(inputCol = 'TechSupport', outputCol = 'TechSupport_IX')
si_StreamingTV = StringIndexer(inputCol = 'StreamingTV', outputCol = 'StreamingTV_IX')
si_StreamingMovies = StringIndexer(inputCol = 'StreamingMovies', outputCol = 'StreamingMovies_IX')
si_Contract = StringIndexer(inputCol = 'Contract', outputCol = 'Contract_IX')
si_PaperlessBilling = StringIndexer(inputCol = 'PaperlessBilling', outputCol = 'PaperlessBilling_IX')
si_PaymentMethod = StringIndexer(inputCol = 'PaymentMethod', outputCol = 'PaymentMethod_IX')
si_Label = StringIndexer(inputCol="Churn", outputCol="label").fit(spark_df)
label_converter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=si_Label.labels)
###Output
_____no_output_____
###Markdown
3.4 Create a single vector
###Code
va_features = VectorAssembler(inputCols=['gender_IX', 'SeniorCitizen', 'Partner_IX', 'Dependents_IX', 'PhoneService_IX', 'MultipleLines_IX', 'InternetService_IX', \
'OnlineSecurity_IX', 'OnlineBackup_IX', 'DeviceProtection_IX', 'TechSupport_IX', 'StreamingTV_IX', 'StreamingMovies_IX', \
'Contract_IX', 'PaperlessBilling_IX', 'PaymentMethod_IX', 'TotalCharges', 'MonthlyCharges'], outputCol="features")
###Output
_____no_output_____
###Markdown
3.5 Create a pipeline, and fit a model using RandomForestClassifier Assemble all the stages into a pipeline. We don't expect a clean linear regression, so we'll use RandomForestClassifier to find the best decision tree for the data.
###Code
classifier = RandomForestClassifier(featuresCol="features")
pipeline = Pipeline(stages=[si_gender, si_Partner, si_Dependents, si_PhoneService, si_MultipleLines, si_InternetService, si_OnlineSecurity, si_OnlineBackup, si_DeviceProtection, \
si_TechSupport, si_StreamingTV, si_StreamingMovies, si_Contract, si_PaperlessBilling, si_PaymentMethod, si_Label, va_features, \
classifier, label_converter])
model = pipeline.fit(train_data)
predictions = model.transform(test_data)
evaluatorDT = BinaryClassificationEvaluator(rawPredictionCol="prediction")
area_under_curve = evaluatorDT.evaluate(predictions)
#default evaluation is areaUnderROC
print("areaUnderROC = %g" % area_under_curve)
###Output
_____no_output_____
###Markdown
4.0 Save the model and test data Add a unique name for MODEL_NAME.
###Code
MODEL_NAME = "my-model my-date"
###Output
_____no_output_____
###Markdown
4.1 Save the model to ICP4D local Watson Machine LearningReplace the `username` and `password` values of `*****` with your Cloud Pak for Data `username` and `password`.The value for `url` should match the `url` for your Cloud Pak for Data cluster.
###Code
from watson_machine_learning_client import WatsonMachineLearningAPIClient
wml_credentials = {
"url": "https://zen-cpd-zen.apps.os-workshop-nov22.vz-cpd-nov22.com",
"username": "*****",
"password" : "*****",
"instance_id": "wml_local",
"version" : "2.5.0"
}
client = WatsonMachineLearningAPIClient(wml_credentials)
client.spaces.list()
###Output
_____no_output_____
###Markdown
Use the desired space as the `default_space`Put the `GUID` of the desired space as the parameter below
###Code
client.set.default_space('<GUID>')
# Store our model
model_props = {client.repository.ModelMetaNames.NAME: MODEL_NAME,
client.repository.ModelMetaNames.RUNTIME_UID : "spark-mllib_2.3",
client.repository.ModelMetaNames.TYPE : "mllib_2.3"}
published_model = client.repository.store_model(model=model, pipeline=pipeline, meta_props=model_props, training_data=train_data)
# Use this cell to do any cleanup of previously created models and deployments
client.repository.list_models()
client.deployments.list()
# client.repository.delete('GUID of stored model')
# client.deployments.delete('GUID of deployed model')
###Output
_____no_output_____
###Markdown
4.2 Write the test data without label to a .csv so that we can later use it for batch scoring
###Code
write_score_CSV=test_data.toPandas().drop(['Churn'], axis=1)
write_score_CSV.to_csv('/project_data/data_asset/TelcoCustomerSparkMLBatchScore.csv', sep=',', index=False)
###Output
_____no_output_____
###Markdown
4.3 Write the test data to a .csv so that we can later use it for evaluation
###Code
write_eval_CSV=test_data.toPandas()
write_eval_CSV.to_csv('/project_data/data_asset/TelcoCustomerSparkMLEval.csv', sep=',', index=False)
###Output
_____no_output_____ |
remote_sensing/python/Local_Jupyter_NoteBooks/Acre per Crop Group - Mike Brady - Douglas.ipynb | ###Markdown
Acre per Crop Group - Mike Brady - Douglas
###Code
# import warnings
# warnings.filterwarnings("ignore")
import csv
import numpy as np
import pandas as pd
# import geopandas as gpd
from IPython.display import Image
# from shapely.geometry import Point, Polygon
from math import factorial
import scipy
import scipy.signal
import os, os.path
from datetime import date
import datetime
import time
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from sklearn.linear_model import LinearRegression
from patsy import cr
# from pprint import pprint
import matplotlib.pyplot as plt
import seaborn as sb
import sys
# to move files from one directory to another
import shutil
sys.path.append('/Users/hn/Documents/00_GitHub/Ag/remote_sensing/python/')
import remote_sensing_core as rc
import remote_sensing_plot_core as rcp
start_time = time.time()
###Output
_____no_output_____
###Markdown
crop_type vs crop_group
###Code
data_dir = "/Users/hn/Documents/01_research_data/remote_sensing/01_Data_part_not_filtered/"
shapeFile_2016 = pd.read_csv(data_dir + f_names[0], low_memory = False)
shapeFile_2017 = pd.read_csv(data_dir + f_names[1], low_memory = False)
shapeFile_2018 = pd.read_csv(data_dir + f_names[2], low_memory = False)  # was f_names[1], a copy-paste slip
shapeFile_2016.head(5)
###Output
_____no_output_____
###Markdown
Find acreage per crop group and crop type in Douglas
###Code
data_dir = "/Users/hn/Documents/01_research_data/remote_sensing/" + \
"01_NDVI_TS/70_Cloud/00_Eastern_WA_withYear/2Years/Douglas_MikeBrady/"
param_dir = "/Users/hn/Documents/00_GitHub/Ag/remote_sensing/parameters/"
given_county = "Douglas"
SG_win_size = 7
SG_poly_Order = 3
years = [2016, 2017, 2018]
file_names = ["Douglas_2016_regular_EVI_SG_win7_Order3.csv",
"Douglas_2017_regular_EVI_SG_win7_Order3.csv",
"Douglas_2018_regular_EVI_SG_win7_Order3.csv"]
double_crop_potential = pd.read_csv(param_dir + "double_crop_potential_plants.csv")
irrigated_only = True
NASS_out = False
only_annuals = True
small_fields_out = False
Last_survey_year = False
remove_columns = ["RtCrpTy", "SOS", "EOS", "human_system_start_time", "Shp_Lng", "Shap_Ar",
"TRS", "doy", "IntlSrD"]
for file in file_names:
print (file)
original_data_tble = pd.read_csv(data_dir + file, low_memory = False)
data_tble = original_data_tble.copy()
if irrigated_only == True:
data_tble = rc.filter_out_nonIrrigated(data_tble)
irrigated_name = "onlyIrrigated"
else:
irrigated_name = "irrigatedAndNonIrr"
if NASS_out == True:
data_tble = rc.filter_out_NASS(data_tble)
NASS_out_name = "NASSOut"
else:
NASS_out_name = "allSurvySources"
if only_annuals == True:
data_tble = data_tble[data_tble.CropTyp.isin(double_crop_potential['Crop_Type'])]
only_annuals_name = "onlyAnnuals"
else:
only_annuals_name = "AnnualsAndPerenials"
if small_fields_out == True:
data_tble = data_tble[data_tble.Acres > 3]
small_fields_name = "bigFields"
else:
small_fields_name = "bigAndSmallFields"
proper_year = file.split("_")[1]
proper_year = proper_year.split(".")[0]
if Last_survey_year == True:
data_tble = data_tble[data_tble['LstSrvD'].str.contains(proper_year)]
Last_survey_year_name = "correctLstSrvD"
else:
Last_survey_year_name = "wrongLstSrvD"
CropGrp_acreage = data_tble.groupby(["county", "CropGrp", "season_count"]).Acres.sum().reset_index()
CropType_acreage = data_tble.groupby(["county", "CropTyp", "season_count"]).Acres.sum().reset_index()
# Saving path (the original defined an acreage_tables subfolder and then
# immediately overrode it; outputs are written directly into data_dir)
out_dir = data_dir
os.makedirs(out_dir, exist_ok=True)
CropGrp_acreage_name = out_dir + proper_year + "_" + \
"Douglas_CropGrp_doubleAcr_" + \
irrigated_name + "_" + \
NASS_out_name + "_" + \
only_annuals_name + "_" + \
small_fields_name + "_" + \
Last_survey_year_name + "_" + \
".csv"
CropType_acreage_name = out_dir + proper_year + "_" + \
"Douglas_CropType_doubleAcr_" + \
irrigated_name + "_" + \
NASS_out_name + "_" + \
only_annuals_name + "_" + \
small_fields_name + "_" + \
Last_survey_year_name + \
".csv"
CropGrp_acreage.to_csv(CropGrp_acreage_name, index = False)
CropType_acreage.to_csv(CropType_acreage_name, index = False)
###Output
Douglas_2016_regular_EVI_SG_win7_Order3.csv
Douglas_2017_regular_EVI_SG_win7_Order3.csv
Douglas_2018_regular_EVI_SG_win7_Order3.csv
###Markdown
print(data_tble.DataSrc.unique())
print(data_tble.Irrigtn.unique())
print(data_tble.CropTyp.unique())
print(data_tble.season_count.unique())
print(data_tble.LstSrvD.unique())
print(data_tble.image_year.unique())
print(data_tble.SF_year.unique())
###Code
data_tble.head(2)
CropGrp_acreage
###Output
_____no_output_____ |
Recommender.ipynb | ###Markdown
A Recommendation System from RedditIn this report, I aim to build a recommendation system for Reddit users (referred to as redditors) using only their Reddit username. MotivationIn the age of the internet, consumers have an abundance of choice. From movies to songs to every conceivable product, it seems like we have access to more choice than we can ever hope to exercise. To make the traversal of this information more manageable, many companies build recommendation systems to help their users find more meaningful content or products, with great success. Platforms like Netflix and YouTube would simply not exist if they could not match users with personalized content given their vast content libraries.However, the value we gain from these recommendation engines comes at a cost. These websites collect a tremendous amount of data from us. Google and Facebook particularly track our every movement, not just on the internet but, since the proliferation of smartphones, in the physical world as well. In addition, this is a huge barrier to entry for startups, since the information that sites like Amazon and Facebook have about their customers allows them to provide better suggestions than new platforms can.This is why I set out to see if I could build a recommendation system for users from whom I have not collected any prior information. I plan to explore how we can leverage existing, public data about a user to suggest items they may find interesting. I wish to deploy this as a site that encourages a community to share interests and ideas together rather than with a corporation that is trying to sell them something. The ProductsFirst of all, I had to select which products I wanted to suggest. Netflix suggests movies, YouTube suggests videos and Spotify suggests songs, but Amazon suggests all of these things and a whole lot more. However, Amazon's product discovery algorithm is mainly built on items that someone has already purchased. As a result, a user only gets recommendations based on products they are familiar with. This is to Amazon's advantage, since these products likely have a very high conversion rate. However, it does not lend itself well to finding novel products that are based on our interests rather than our purchase history. Reddit as a window to the soulReddit is one of the most visited sites on the internet. It is a meta forum where users can post links to interesting items on every topic imaginable and discuss them with millions of other people. One of the strongest reasons why Reddit appeals to so many is its anonymity. It is one of the few forums where you can sign up without an email address, and is a far cry from something like Facebook where you have profile pictures and family information. Therefore, redditors can be open and expressive about their interests. Indeed, their personalities are associated with their Reddit usernames rather than their real names. This may lead to trolling, but that is outside the scope of this exercise. The true value of Reddit for this tool is that redditors comment on different topics, and we can access these topics. By accessing the topics where they comment (known as subreddits), we can see which topics they are interested in. A clear limitation of this is that certain redditors only browse subreddits without commenting on them (colloquially known as 'lurking'). There is no way to access this information. Indeed, I do not think that Reddit even makes the subreddits that a user is subscribed to public.
Therefore, we have to use the comments and get the subreddits from them. Collaborative FilteringThere are a variety of ways that recommender systems can work. In order to harness the power of the social fabric of Reddit, I will use 'Collaborative Filtering'. Collaborative Filtering is a way to utilize user inputs to suggest products to similar users. Usually, it is done in the manner that if User 1 likes X, Y and Z and User 2 likes X and Y, there is a high probability that User 2 likes Z. This is the way that Amazon does it. However, we do not have access to the users' purchase history and, as mentioned above, we want to suggest novel products that are independent of our target user's purchase history. Therefore, we extend the same concept to subreddits rather than products. In our case, we postulate that if User X and User Y comment on similar topics, they have similar tastes and therefore we can recommend the products that User X likes to User Y. Getting DataFor Collaborative Filtering to work, however, we need a seed dataset of products to suggest to our users. This is where most platforms fail in their recommender systems. Without this critical mass of data, there would be nothing to suggest. However, upon research, I found a particular subreddit that provided me with a workaround: r/randomactsofamazon. R/RandomActsOfAmazon is an offshoot of an idea that the Reddit community has had for a long time, the act of randomly gifting something to another redditor. It originated with R/RandomActsOfPizza where, like the title would suggest, redditors would gift each other pizza delivered to their doorstep. R/RandomActsOfAmazon takes this concept further and allows redditors to post wishlists of things they want from Amazon. These redditors then hope that some stranger might like them and buy them a gift from their wishlist. This data is perfect for me as it contains the products that a particular redditor wants/thinks are cool. After contacting the moderators for r/randomactsofamazon, I was referred to http://n8fq.org/temp/links.sql This is an SQL dump of the entire database of active users on r/randomactsofamazon. It is a public site that is used to power functionality for r/randomactsofamazon and I was given permission to use it. I was able to parse this sql dump into a database and convert it into a csv file in another script (see github repo); a hedged sketch of such a parsing step is included with the imports below. I will use this csv file for my recommendation engine. I will also encourage users to gift their 'Reddit Twin' (i.e. the redditor in my database who is their closest match) something from their wishlist. However, since I do not have informed consent for every reddit user in my database, I will not disclose the reddit id of their twin. Amazon holds the address of the wishlist owner secret to allow for safe, anonymous gifting. Now, let's get to the good stuff. Let's import our dependencies. In this case, we need pandas to manipulate our data and PRAW, the Python Reddit API Wrapper.Find out more about PRAW at https://praw.readthedocs.io/en/latest/
###Code
import pandas as pd
import praw
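# A minimal sketch -- an assumption on my part, not the actual parsing script
# mentioned above -- of reducing the links.sql dump to (user, link) rows with a
# naive regex over the INSERT tuples, then writing them out as a csv:
import re, csv
def sql_dump_to_csv(dump_path='links.sql', out_path='newlist.csv'):
    with open(dump_path, encoding='utf-8', errors='ignore') as f:
        rows = re.findall(r"\('([^']+)',\s*'(https?://[^']+)'\)", f.read())
    with open(out_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['User', 'Raw Url'])
        writer.writerows(rows)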
###Output
_____no_output_____
###Markdown
We instantiate our Reddit instance. We have registered this script with Reddit and obtained a key (client secret). The account details are mine.
###Code
reddit = praw.Reddit(client_id = 'tOwGdbmWXETjEA',
client_secret = 'FPfgf52wvSG0TN-y_wlTRavGlpU',
username = 'amazonrecommender',
password = 'amazon',
user_agent='test')
###Output
_____no_output_____
###Markdown
Let's import the data. We will have 3 columns: the username, a raw link, and a processed link that I have cleaned up and added tracking information to. This will allow me to monitor the traffic being sent to Amazon when the site is published. We will use 'New Url' in this program.
###Code
df = pd.read_csv('newlist.csv', index_col=0)  # DataFrame.from_csv is deprecated
df.head()
###Output
_____no_output_____
###Markdown
We now write a function that uses PRAW to get the subreddits that a redditor has commented in. To keep in touch with the redditor's current interests, we will only consider the 500 most recent comments. Note that we are using a set so that the subreddits are unique and unordered. We also need to normalize these by removing the most popular subreddits as well as randomactsofamazon itself. This ensures that the subreddits that reflect unique tastes are given higher weight.
###Code
defaultsubs = set(['announcements','funny','AskReddit','todayilearned','science','pics','IAmA','randomactsofamazon'])
def normalizesubs(usersubs):
normalizedsubs = usersubs - defaultsubs
return (normalizedsubs)
def getsubs(username):
subs = set()
redditor = reddit.redditor(str(username))
for comment in redditor.comments.new(limit=500):
subs.add(str(comment.subreddit))
#normalize subs by removing most popular subs.
subs=normalizesubs(subs)
return(subs)
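# Quick check of the helper, moved here from the authentication cell above so
# that it runs after getsubs is defined (it previously ran before the
# definition and would raise a NameError):
getsubs('abhi91')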
###Output
_____no_output_____
###Markdown
In order to give our user a good experience, we need to minimize loading times for our recommendations. The most time-consuming part of this whole script is extracting all the comments from a user. We can minimize this time by pulling the list of subreddits that the redditors in our database have commented in as of today (12/10/2017). While this means that our recommendation engine is not dynamic with the changing interests of our database redditors, it significantly cuts down on loading time. We can update this list of subreddits at regular intervals in the future. We will append this list of subreddits to our dataframe.
###Code
# Takes a long time. To skip it, just import redditproducts.csv via the cells below
subslist = []
for index,rows in df.iterrows():
user = rows['User']
print('Trying for user number: ',index,' username: ',user)
try:
subslist.append(getsubs(user))
except Exception:
#if there is an error, the user has deleted their reddit account. Store Null and continue with the loop
subslist.append('Null')
print('user not there')
continue
df['Subs']=subslist
###Output
Trying for user number: 0 username: Ali-Sama
Trying for user number: 1 username: Rysona
Trying for user number: 2 username: G0ATLY
Trying for user number: 3 username: dancemasterv
Trying for user number: 4 username: chunkopunk
Trying for user number: 5 username: Browntizzle
Trying for user number: 6 username: rockinDS24
Trying for user number: 7 username: rsgamg
Trying for user number: 8 username: 82364
Trying for user number: 9 username: RyanOver9000
Trying for user number: 10 username: L_Cranston_Shadow
Trying for user number: 11 username: neuromorph
Trying for user number: 12 username: havechanged
Trying for user number: 13 username: TAPorter
Trying for user number: 14 username: cj151695
Trying for user number: 15 username: NibbleFish
Trying for user number: 16 username: ninja_nicci
Trying for user number: 17 username: ShiroiMana
Trying for user number: 18 username: mitsimac
Trying for user number: 19 username: FirstLadyOfBeer
Trying for user number: 20 username: imscreamingrightnow
Trying for user number: 21 username: whiteandyellow
Trying for user number: 22 username: Balaguru_BR5
Trying for user number: 23 username: kanooka
Trying for user number: 24 username: ScarlettMaeBottom
Trying for user number: 25 username: Jacewoop23
Trying for user number: 26 username: t-minus5tosexy
Trying for user number: 27 username: MagicMistoffelees
Trying for user number: 28 username: Allybama
Trying for user number: 29 username: PapaRomeoAlpha
Trying for user number: 30 username: spilledbeans
Trying for user number: 31 username: nddragoon
Trying for user number: 32 username: ProbableErorr
Trying for user number: 33 username: lcarosella
Trying for user number: 34 username: theregoesmyeye
user not there
Trying for user number: 35 username: discojaxx
Trying for user number: 36 username: Rayn_the_hunter
Trying for user number: 37 username: Envy_The_Jealous
Trying for user number: 38 username: TrocheeCity
Trying for user number: 39 username: Bucket4Life
Trying for user number: 40 username: ruphuselderbeer
Trying for user number: 41 username: Enovara
Trying for user number: 42 username: SEQU0IA
Trying for user number: 43 username: LittleMizPixie
Trying for user number: 44 username: spacesleep
Trying for user number: 45 username: EriSeguchi
Trying for user number: 46 username: kurosaba
Trying for user number: 47 username: Sp3cia1K
Trying for user number: 48 username: MMAPhreak21
Trying for user number: 49 username: SailoLee
Trying for user number: 50 username: Sieberella
Trying for user number: 51 username: GrtNPwrfulOz
Trying for user number: 52 username: kirklazaurs87
Trying for user number: 53 username: rarelyserious
Trying for user number: 54 username: giggidywarlock
user not there
Trying for user number: 56 username: TwistedEnigma
Trying for user number: 57 username: watsoned
Trying for user number: 58 username: SukiTheGoat
Trying for user number: 59 username: MCubb
Trying for user number: 60 username: neongreenpurple
Trying for user number: 61 username: mewfasa
Trying for user number: 62 username: mattidallama
Trying for user number: 63 username: burke_no_sleeps
Trying for user number: 64 username: Headcall
Trying for user number: 65 username: paradoxikal
Trying for user number: 66 username: MKandtheforce
Trying for user number: 67 username: SirRipo
Trying for user number: 68 username: Rubenick
Trying for user number: 69 username: wee-pixie
user not there
Trying for user number: 70 username: saratonin84
Trying for user number: 71 username: EdenSB
Trying for user number: 72 username: Girfex
Trying for user number: 73 username: windurr
Trying for user number: 74 username: s2xtreme4u
Trying for user number: 75 username: dragonflyjen
Trying for user number: 76 username: HeadlessBob17
Trying for user number: 77 username: krispykremedonuts
Trying for user number: 78 username: legotech
Trying for user number: 79 username: Akeleie
Trying for user number: 80 username: glanmiregirl
Trying for user number: 81 username: BuffHagen
user not there
Trying for user number: 82 username: thatbossguy
Trying for user number: 83 username: Matronix
Trying for user number: 84 username: GhostOfTheNet
Trying for user number: 85 username: Brenhines
Trying for user number: 86 username: teatimeoclock
Trying for user number: 87 username: kayleighh
Trying for user number: 88 username: xX_Justin_Xx
Trying for user number: 89 username: Ajs1004
Trying for user number: 90 username: RCJhawk
Trying for user number: 91 username: ItsACharlieDay
Trying for user number: 92 username: OfMonstersAndSuicide
Trying for user number: 93 username: angelninja
Trying for user number: 94 username: Roehok
Trying for user number: 95 username: cupcakegiraffe
Trying for user number: 96 username: Kibure
Trying for user number: 97 username: LeftMySoulAtHome
Trying for user number: 98 username: paintnwood
Trying for user number: 99 username: lobstahfi
Trying for user number: 100 username: origami_rock
Trying for user number: 101 username: natalietoday
Trying for user number: 102 username: OverlyApologeticGuy
Trying for user number: 103 username: Chaela
Trying for user number: 104 username: bethydolla
Trying for user number: 105 username: Ereshkigal234
Trying for user number: 106 username: drzedwordhunter
Trying for user number: 107 username: hannaHananaB
Trying for user number: 108 username: ebooksgirl
Trying for user number: 109 username: 0hfuck
Trying for user number: 111 username: homeallday
Trying for user number: 112 username: 3dmesh
Trying for user number: 113 username: theCaitiff
Trying for user number: 114 username: Coffin_Nail
Trying for user number: 115 username: SphynxKitty
Trying for user number: 116 username: sylvar
Trying for user number: 117 username: Candroth
Trying for user number: 118 username: OptomisticOcelot
Trying for user number: 119 username: cthylla
Trying for user number: 120 username: effeduphealer
Trying for user number: 121 username: Liies
Trying for user number: 122 username: ichosethis
Trying for user number: 123 username: EmergencyPizza
Trying for user number: 124 username: aimeenew
Trying for user number: 125 username: stutterbutt
Trying for user number: 126 username: ApeOver
Trying for user number: 127 username: timelady84
Trying for user number: 128 username: trisarahdactyl
Trying for user number: 129 username: KittenAnne
Trying for user number: 130 username: Nemesis0320
Trying for user number: 131 username: travelersoul
Trying for user number: 132 username: draconorge
Trying for user number: 133 username: MsRocky
Trying for user number: 134 username: DiscoKittie
Trying for user number: 135 username: angel92591
Trying for user number: 136 username: R3bel_R3bel
Trying for user number: 137 username: Dee_Doubleyew_TTT
Trying for user number: 138 username: Junigole
Trying for user number: 139 username: hellooolady
Trying for user number: 140 username: titchard
Trying for user number: 141 username: Shercock_Holmes
Trying for user number: 142 username: book_worm526
Trying for user number: 143 username: themusicliveson
Trying for user number: 144 username: writerlib
Trying for user number: 145 username: Janiichan
Trying for user number: 146 username: UnBornPorn
Trying for user number: 147 username: nijoli
Trying for user number: 148 username: DoodlesAndSuch
Trying for user number: 149 username: imsamsam
Trying for user number: 150 username: Sneekyninja
Trying for user number: 151 username: browneyedgirl79
Trying for user number: 152 username: MoonPrisimPower
Trying for user number: 153 username: DrUsual
Trying for user number: 154 username: ink1026
Trying for user number: 155 username: dapamico
Trying for user number: 156 username: melumebelle
Trying for user number: 157 username: earthsick
Trying for user number: 158 username: MisterJimJim
###Markdown
If you are running the script, run the cell below to avoid waiting a bunch for the code above.
###Code
# If you are running this script, please just use this as the DataFrame
# (pd.DataFrame.from_csv was removed from pandas; read_csv does the same job)
df = pd.read_csv('redditproducts.csv', index_col=0)
# If you are running the script then run this cell.
# The CSV round-trip stores each user's set of subreddits as a string,
# so ast.literal_eval is used to safely parse it back into a Python set.
import ast
newlist = []
for i in range(len(df)):
    try:
        print(i)
        newlist.append(ast.literal_eval(df['Subs'][i]))
        print(type(newlist[i]))
    except:
        print(i,'Null Subs, user does not exist')
        newlist.append('Null')
df['Subs']=newlist
###Output
0
<class 'set'>
1
<class 'set'>
2
<class 'set'>
3
<class 'set'>
4
<class 'set'>
5
<class 'set'>
6
<class 'set'>
7
<class 'set'>
8
<class 'set'>
9
<class 'set'>
10
<class 'set'>
11
<class 'set'>
12
<class 'set'>
13
<class 'set'>
14
<class 'set'>
15
<class 'set'>
16
<class 'set'>
17
<class 'set'>
18
<class 'set'>
19
<class 'set'>
20
<class 'set'>
21
<class 'set'>
22
<class 'set'>
23
<class 'set'>
24
<class 'set'>
25
<class 'set'>
26
<class 'set'>
27
<class 'set'>
28
<class 'set'>
29
<class 'set'>
30
<class 'set'>
31
<class 'set'>
32
<class 'set'>
33
<class 'set'>
34
34 Null Subs, user does not exist
35
<class 'set'>
36
<class 'set'>
37
<class 'set'>
38
<class 'set'>
39
<class 'set'>
40
<class 'set'>
41
<class 'set'>
42
<class 'set'>
43
<class 'set'>
44
<class 'set'>
45
<class 'set'>
46
<class 'set'>
47
<class 'set'>
48
<class 'set'>
49
<class 'set'>
50
<class 'set'>
51
<class 'set'>
52
<class 'set'>
53
<class 'set'>
54
54 Null Subs, user does not exist
55
55 Null Subs, user does not exist
56
<class 'set'>
57
<class 'set'>
58
<class 'set'>
59
<class 'set'>
60
<class 'set'>
61
<class 'set'>
62
<class 'set'>
63
<class 'set'>
64
<class 'set'>
65
<class 'set'>
66
<class 'set'>
67
<class 'set'>
68
<class 'set'>
69
69 Null Subs, user does not exist
70
<class 'set'>
71
<class 'set'>
72
<class 'set'>
73
<class 'set'>
74
<class 'set'>
75
<class 'set'>
76
<class 'set'>
77
<class 'set'>
78
<class 'set'>
79
<class 'set'>
80
<class 'set'>
81
81 Null Subs, user does not exist
82
<class 'set'>
83
<class 'set'>
84
<class 'set'>
85
<class 'set'>
86
<class 'set'>
87
<class 'set'>
88
<class 'set'>
89
<class 'set'>
90
<class 'set'>
91
<class 'set'>
92
<class 'set'>
93
<class 'set'>
94
<class 'set'>
95
<class 'set'>
96
<class 'set'>
97
<class 'set'>
98
<class 'set'>
99
<class 'set'>
100
<class 'set'>
101
<class 'set'>
102
<class 'set'>
103
<class 'set'>
104
<class 'set'>
105
<class 'set'>
106
<class 'set'>
107
<class 'set'>
108
<class 'set'>
109
<class 'set'>
110
110 Null Subs, user does not exist
111
<class 'set'>
112
<class 'set'>
113
<class 'set'>
114
<class 'set'>
115
<class 'set'>
116
<class 'set'>
117
<class 'set'>
118
<class 'set'>
119
<class 'set'>
120
<class 'set'>
121
<class 'set'>
122
<class 'set'>
123
<class 'set'>
124
<class 'set'>
125
<class 'set'>
126
<class 'set'>
127
<class 'set'>
128
<class 'set'>
129
<class 'set'>
130
<class 'set'>
131
<class 'set'>
132
<class 'set'>
133
<class 'set'>
134
<class 'set'>
135
<class 'set'>
136
<class 'set'>
137
<class 'set'>
138
<class 'set'>
139
<class 'set'>
140
<class 'set'>
141
<class 'set'>
142
<class 'set'>
143
143 Null Subs, user does not exist
144
<class 'set'>
145
<class 'set'>
146
<class 'set'>
147
<class 'set'>
148
<class 'set'>
149
<class 'set'>
150
<class 'set'>
151
<class 'set'>
152
<class 'set'>
153
<class 'set'>
154
<class 'set'>
155
<class 'set'>
156
<class 'set'>
157
<class 'set'>
158
<class 'set'>
159
<class 'set'>
160
<class 'set'>
161
<class 'set'>
162
<class 'set'>
163
<class 'set'>
164
<class 'set'>
165
<class 'set'>
166
<class 'set'>
167
167 Null Subs, user does not exist
168
<class 'set'>
169
<class 'set'>
170
<class 'set'>
171
<class 'set'>
172
<class 'set'>
173
<class 'set'>
174
<class 'set'>
175
<class 'set'>
176
<class 'set'>
177
<class 'set'>
178
<class 'set'>
179
<class 'set'>
180
<class 'set'>
181
<class 'set'>
182
<class 'set'>
183
<class 'set'>
184
<class 'set'>
185
<class 'set'>
186
<class 'set'>
187
<class 'set'>
188
<class 'set'>
189
<class 'set'>
190
<class 'set'>
191
<class 'set'>
192
<class 'set'>
193
<class 'set'>
194
<class 'set'>
195
<class 'set'>
196
<class 'set'>
197
<class 'set'>
198
<class 'set'>
199
<class 'set'>
200
<class 'set'>
201
<class 'set'>
202
<class 'set'>
203
<class 'set'>
204
<class 'set'>
205
<class 'set'>
206
<class 'set'>
207
<class 'set'>
208
<class 'set'>
209
<class 'set'>
210
<class 'set'>
211
<class 'set'>
212
<class 'set'>
213
<class 'set'>
214
<class 'set'>
215
<class 'set'>
216
<class 'set'>
217
<class 'set'>
218
<class 'set'>
219
<class 'set'>
220
<class 'set'>
221
<class 'set'>
222
<class 'set'>
223
<class 'set'>
224
<class 'set'>
225
<class 'set'>
226
<class 'set'>
227
<class 'set'>
228
<class 'set'>
229
<class 'set'>
230
<class 'set'>
231
<class 'set'>
232
<class 'set'>
233
<class 'set'>
234
<class 'set'>
235
<class 'set'>
236
<class 'set'>
237
<class 'set'>
238
<class 'set'>
239
<class 'set'>
240
<class 'set'>
241
<class 'set'>
242
<class 'set'>
243
<class 'set'>
244
<class 'set'>
245
245 Null Subs, user does not exist
246
<class 'set'>
247
247 Null Subs, user does not exist
248
<class 'set'>
249
<class 'set'>
250
<class 'set'>
251
<class 'set'>
252
<class 'set'>
253
<class 'set'>
254
<class 'set'>
255
<class 'set'>
256
<class 'set'>
257
<class 'set'>
258
<class 'set'>
259
<class 'set'>
260
<class 'set'>
261
<class 'set'>
262
<class 'set'>
263
<class 'set'>
264
<class 'set'>
265
<class 'set'>
266
<class 'set'>
267
<class 'set'>
268
<class 'set'>
269
<class 'set'>
270
<class 'set'>
271
<class 'set'>
272
<class 'set'>
273
<class 'set'>
274
<class 'set'>
275
<class 'set'>
276
<class 'set'>
277
<class 'set'>
278
<class 'set'>
279
<class 'set'>
280
<class 'set'>
281
<class 'set'>
282
282 Null Subs, user does not exist
283
<class 'set'>
284
<class 'set'>
285
<class 'set'>
286
<class 'set'>
287
<class 'set'>
288
<class 'set'>
289
<class 'set'>
290
<class 'set'>
291
<class 'set'>
292
<class 'set'>
293
<class 'set'>
294
<class 'set'>
295
<class 'set'>
296
<class 'set'>
297
<class 'set'>
298
<class 'set'>
299
<class 'set'>
300
<class 'set'>
301
<class 'set'>
302
<class 'set'>
303
<class 'set'>
304
<class 'set'>
305
<class 'set'>
306
<class 'set'>
307
<class 'set'>
308
<class 'set'>
309
<class 'set'>
310
<class 'set'>
311
<class 'set'>
312
<class 'set'>
313
<class 'set'>
314
<class 'set'>
315
<class 'set'>
316
<class 'set'>
317
<class 'set'>
318
<class 'set'>
319
<class 'set'>
320
<class 'set'>
321
<class 'set'>
322
<class 'set'>
323
<class 'set'>
324
<class 'set'>
325
<class 'set'>
326
<class 'set'>
327
<class 'set'>
328
<class 'set'>
329
<class 'set'>
330
<class 'set'>
331
<class 'set'>
332
<class 'set'>
333
<class 'set'>
334
<class 'set'>
335
<class 'set'>
336
<class 'set'>
337
<class 'set'>
338
<class 'set'>
339
<class 'set'>
340
<class 'set'>
341
<class 'set'>
342
<class 'set'>
343
<class 'set'>
344
<class 'set'>
345
<class 'set'>
346
<class 'set'>
347
<class 'set'>
348
<class 'set'>
349
<class 'set'>
350
<class 'set'>
351
<class 'set'>
352
<class 'set'>
353
<class 'set'>
354
<class 'set'>
355
<class 'set'>
356
<class 'set'>
357
<class 'set'>
358
<class 'set'>
359
<class 'set'>
360
<class 'set'>
361
<class 'set'>
362
<class 'set'>
363
<class 'set'>
364
<class 'set'>
365
<class 'set'>
366
<class 'set'>
367
<class 'set'>
368
<class 'set'>
369
<class 'set'>
370
<class 'set'>
371
<class 'set'>
372
<class 'set'>
373
<class 'set'>
374
<class 'set'>
375
<class 'set'>
376
<class 'set'>
377
<class 'set'>
378
<class 'set'>
379
<class 'set'>
380
<class 'set'>
381
<class 'set'>
382
<class 'set'>
383
<class 'set'>
384
<class 'set'>
385
<class 'set'>
386
<class 'set'>
387
<class 'set'>
388
<class 'set'>
389
<class 'set'>
390
<class 'set'>
391
<class 'set'>
392
<class 'set'>
393
<class 'set'>
394
<class 'set'>
395
<class 'set'>
396
<class 'set'>
397
<class 'set'>
398
<class 'set'>
399
<class 'set'>
400
<class 'set'>
401
<class 'set'>
402
<class 'set'>
403
<class 'set'>
404
<class 'set'>
405
405 Null Subs, user does not exist
406
<class 'set'>
407
<class 'set'>
408
<class 'set'>
409
<class 'set'>
410
<class 'set'>
411
<class 'set'>
412
<class 'set'>
413
<class 'set'>
414
<class 'set'>
415
<class 'set'>
416
<class 'set'>
417
<class 'set'>
418
418 Null Subs, user does not exist
419
<class 'set'>
420
<class 'set'>
421
<class 'set'>
422
<class 'set'>
423
<class 'set'>
424
<class 'set'>
425
<class 'set'>
426
<class 'set'>
427
<class 'set'>
428
<class 'set'>
429
<class 'set'>
430
<class 'set'>
431
<class 'set'>
432
<class 'set'>
433
<class 'set'>
434
<class 'set'>
435
<class 'set'>
436
<class 'set'>
437
<class 'set'>
438
<class 'set'>
439
<class 'set'>
440
<class 'set'>
441
<class 'set'>
442
<class 'set'>
443
<class 'set'>
444
<class 'set'>
445
<class 'set'>
446
<class 'set'>
447
<class 'set'>
448
<class 'set'>
449
<class 'set'>
450
<class 'set'>
451
<class 'set'>
452
<class 'set'>
453
<class 'set'>
454
<class 'set'>
455
<class 'set'>
456
<class 'set'>
457
<class 'set'>
458
<class 'set'>
459
<class 'set'>
460
<class 'set'>
461
<class 'set'>
462
<class 'set'>
463
<class 'set'>
464
464 Null Subs, user does not exist
465
<class 'set'>
466
<class 'set'>
467
<class 'set'>
468
<class 'set'>
469
<class 'set'>
470
<class 'set'>
471
<class 'set'>
472
<class 'set'>
473
<class 'set'>
474
<class 'set'>
475
<class 'set'>
476
<class 'set'>
477
<class 'set'>
478
<class 'set'>
479
<class 'set'>
480
<class 'set'>
481
481 Null Subs, user does not exist
482
<class 'set'>
483
<class 'set'>
484
<class 'set'>
485
<class 'set'>
486
<class 'set'>
487
<class 'set'>
488
<class 'set'>
489
<class 'set'>
490
<class 'set'>
491
<class 'set'>
492
<class 'set'>
493
<class 'set'>
494
<class 'set'>
495
<class 'set'>
496
<class 'set'>
497
497 Null Subs, user does not exist
498
<class 'set'>
499
<class 'set'>
500
<class 'set'>
501
<class 'set'>
502
<class 'set'>
503
<class 'set'>
504
<class 'set'>
505
<class 'set'>
506
<class 'set'>
507
<class 'set'>
508
<class 'set'>
509
<class 'set'>
510
<class 'set'>
511
<class 'set'>
512
<class 'set'>
513
<class 'set'>
514
<class 'set'>
515
<class 'set'>
516
<class 'set'>
517
<class 'set'>
518
<class 'set'>
519
<class 'set'>
520
<class 'set'>
521
<class 'set'>
522
<class 'set'>
523
<class 'set'>
524
<class 'set'>
525
<class 'set'>
526
<class 'set'>
527
<class 'set'>
528
<class 'set'>
529
<class 'set'>
530
<class 'set'>
531
<class 'set'>
532
<class 'set'>
533
<class 'set'>
534
<class 'set'>
535
<class 'set'>
536
<class 'set'>
537
<class 'set'>
538
<class 'set'>
539
<class 'set'>
540
<class 'set'>
541
541 Null Subs, user does not exist
542
<class 'set'>
543
<class 'set'>
544
<class 'set'>
545
<class 'set'>
546
<class 'set'>
547
<class 'set'>
548
<class 'set'>
549
<class 'set'>
550
<class 'set'>
551
<class 'set'>
552
<class 'set'>
553
<class 'set'>
554
<class 'set'>
555
<class 'set'>
556
<class 'set'>
557
<class 'set'>
558
558 Null Subs, user does not exist
559
<class 'set'>
560
<class 'set'>
561
<class 'set'>
562
<class 'set'>
563
<class 'set'>
564
<class 'set'>
565
<class 'set'>
566
<class 'set'>
567
<class 'set'>
568
<class 'set'>
569
<class 'set'>
570
<class 'set'>
571
<class 'set'>
572
<class 'set'>
573
<class 'set'>
574
<class 'set'>
575
<class 'set'>
576
<class 'set'>
577
<class 'set'>
578
<class 'set'>
579
<class 'set'>
580
<class 'set'>
581
<class 'set'>
582
<class 'set'>
583
<class 'set'>
584
<class 'set'>
585
<class 'set'>
586
<class 'set'>
587
587 Null Subs, user does not exist
588
<class 'set'>
589
<class 'set'>
590
<class 'set'>
591
<class 'set'>
592
592 Null Subs, user does not exist
593
<class 'set'>
594
<class 'set'>
595
595 Null Subs, user does not exist
596
<class 'set'>
597
<class 'set'>
598
<class 'set'>
599
<class 'set'>
600
<class 'set'>
601
<class 'set'>
602
<class 'set'>
603
<class 'set'>
604
<class 'set'>
605
<class 'set'>
606
<class 'set'>
607
<class 'set'>
608
<class 'set'>
609
<class 'set'>
610
<class 'set'>
611
<class 'set'>
612
<class 'set'>
613
<class 'set'>
614
<class 'set'>
615
<class 'set'>
616
<class 'set'>
617
<class 'set'>
618
<class 'set'>
619
<class 'set'>
620
<class 'set'>
621
<class 'set'>
622
<class 'set'>
623
<class 'set'>
624
<class 'set'>
625
<class 'set'>
626
<class 'set'>
627
<class 'set'>
628
<class 'set'>
629
<class 'set'>
630
<class 'set'>
631
<class 'set'>
632
<class 'set'>
633
<class 'set'>
634
<class 'set'>
635
<class 'set'>
636
<class 'set'>
637
<class 'set'>
638
<class 'set'>
639
639 Null Subs, user does not exist
640
<class 'set'>
641
<class 'set'>
642
<class 'set'>
643
<class 'set'>
644
<class 'set'>
645
<class 'set'>
646
<class 'set'>
647
<class 'set'>
648
<class 'set'>
649
<class 'set'>
650
<class 'set'>
651
<class 'set'>
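###Markdown
Side note: the CSV round-trip is why the parsing above is needed; pandas saves each set of subreddits as its string repr, and ast.literal_eval safely evaluates it back into a set. A minimal sketch with a made-up stored value:
###Code
import ast
# hypothetical stored string, for illustration only
stored = "{'python', 'learnpython'}"
print(type(ast.literal_eval(stored)))  # <class 'set'>
###Output
_____no_output_____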
###Markdown
We will use Jaccard similarity to find similar users based on their subreddits. Jaccard similarity is computed by dividing the size of the intersection of two sets by the size of their union. The function takes 4 arguments: the username, dbsubs, usersubs and the url. It returns a list that has the username of the potential match, the shared subreddits, the similarity score and the recommendation link. We also have to check whether the dbsubs field is actually a set, since we stored the string 'Null' for users who didn't exist when populating the list of subs.
###Code
def jaccardsimilarity(username,dbsubs,usersubs,link):
    # returns [username, shared subs, score, link] for a stored user, or
    # [username, 'Not Active User', 0] when the user's subs were never scraped
    temp = []
    temp.append(username)
    if type(dbsubs)==set:
        intersection = dbsubs.intersection(usersubs)
        temp.append(intersection)
        # Jaccard similarity: |intersection| / |union|
        score = (len(intersection)/len(dbsubs.union(usersubs)))
        temp.append(score)
        temp.append(link)
        return temp
    else:
        score = 0
        temp.append('Not Active User')
        temp.append(score)
        return temp
###Output
_____no_output_____
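###Markdown
As a quick sanity check of the scoring, here is the function on two toy sets (made up purely for illustration): the intersection of {a, b} and {b, c} has size 1 and their union size 3, so the score should be 1/3.
###Code
# hypothetical username and link, toy subreddit sets
print(jaccardsimilarity('demo_user', {'a', 'b'}, {'b', 'c'}, 'http://example.com'))
###Output
_____no_output_____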
###Markdown
Now we put it all together. We check to see if it's a valid username. Then we iterate through our dataframe, run each row through our Jaccard similarity function and collect the results in a list. We then sort that list by the Jaccard similarity score (in descending order) and prune out all but the top 5 results. Finally we loop through the list and print our results.
###Code
# itemgetter is needed below to sort the candidate list by its score column
from operator import itemgetter

def main(username):
cesc = []
try:
usersubs = getsubs(username)
except:
print("Not a valid username.")
return
for i in range(len(df)-1):
try:
cesc.append(jaccardsimilarity(df['User'][i],df['Subs'][i],usersubs,df['New Url'][i]))
except:
continue
# Sort the list by the 3rd item, the Jaccard similarity score
sortedlist = sorted(cesc,key=itemgetter(2),reverse=True)
# Return the top 5 results
sortedlist = sortedlist[:5]
print("Your reddit twins are:")
for i in sortedlist:
print('http://www.reddit.com/u/'+str(i[0]))
print('Similarity Score:',i[2])
print('The Subreddits you share are: ',i[1])
print('And here is some stuff they like. Check it out you may like it as well! ' ,i[3] )
# Just call main with your username as a string to run the recommender
username = 'abhi91'
main(username)
df.to_csv('redditproducts.csv')
###Output
_____no_output_____
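###Markdown
For reference, a minimal sketch of the sorting step in isolation, on made-up rows shaped like the ones jaccardsimilarity returns ([username, shared subs, score, link]):
###Code
from operator import itemgetter
# itemgetter(2) pulls out the score column; reverse=True sorts high to low
rows = [['a', set(), 0.1, 'u1'], ['b', set(), 0.7, 'u2'], ['c', set(), 0.4, 'u3']]
print(sorted(rows, key=itemgetter(2), reverse=True)[:2])
###Output
_____no_output_____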
###Markdown
**The main idea behind the generation of the recommendation system is to recommend posts using both Collaborative and Content Based Filtering methods. For collaborative filtering we will find out all the categories and compute the cosine similarity using user-user collaborative filtering.** **DATA PREPROCESSING** UPLOADING DATASETS AND STORING THEM IN DATAFRAMES
###Code
import pandas as pd
from google.colab import files
import io
# upload posts.csv, users.csv and views.csv files to proceed
uploaded = files.upload()
# creating a user data_frame
users_df = pd.read_csv('users.csv')
users_df.head()
# creating posts data frame
posts_df = pd.read_csv('posts.csv')
posts_df.head()
# creating a views data frame
views_df = pd.read_csv('views.csv')
views_df.head()
###Output
_____no_output_____
###Markdown
DETAILS OF THE DATAFRAMES
###Code
users_df.describe()
views_df.describe()
posts_df.describe()
###Output
_____no_output_____
###Markdown
NULL VALUE IMPUTATION
###Code
users_df.isnull().any()
views_df.isnull().any()
posts_df.isnull().any()
###Output
_____no_output_____
###Markdown
**We have got null values in the category column of the posts dataset.** **As it's a categorical variable, we have to fill the null values considering the title and post_type.** **Counting the total number of null values in posts_df:**
###Code
posts_df['category'].isna().sum()
###Output
_____no_output_____
###Markdown
**Finding the title and post_type for all 28 null values**
###Code
# using pd.isna() function to find null values
posts_df[posts_df['category'].isna()]
###Output
_____no_output_____
###Markdown
**As we can see, most of the null values are for the post_type project, so we will create a new category called project and assign it to the null values; whenever a user leaves the category field empty we will default the category to the post_type.**
###Code
# assign only the category column, not the whole row
posts_df.loc[posts_df['category'].isna(), 'category'] = 'project'
###Output
_____no_output_____
###Markdown
**As we have imputed the null values, let's check again**
###Code
posts_df.isna().any()
###Output
_____no_output_____
###Markdown
**Now let's see how many unique categories are present in the dataset**
###Code
posts_df['category'].unique()
###Output
_____no_output_____
###Markdown
PREPARING A MORE DETAILED DATASET **As we can see, some of the categories are combinations of multiple categories, so we will add more rows to posts_df by splitting them and keeping one category per row.**
###Code
# creating a dictionary which will store each actual category string as key
# and its split categories as the value
categories = {}
# filling the dictionary
for i in posts_df['category']:
categories.update({i:[]})
for j in list(i.split("|")):
categories[i].append(j)
print(categories)
###Output
{'Plant Biotechnology': ['Plant Biotechnology'], 'Artificial Intelligence|Machine Learning|Information Technology': ['Artificial Intelligence', 'Machine Learning', 'Information Technology'], 'Operating Systems': ['Operating Systems'], 'Drawings': ['Drawings'], 'Competition Laws': ['Competition Laws'], 'Eco System': ['Eco System'], 'Economic Policies': ['Economic Policies'], 'Graphic|Graphic Design': ['Graphic', 'Graphic Design'], 'Painting': ['Painting'], 'Pen and ink': ['Pen and ink'], 'Computer Technology|Information Technology': ['Computer Technology', 'Information Technology'], 'Drawings|Painting': ['Drawings', 'Painting'], 'Graphic Design|Visual Arts|Illustration|Graphic': ['Graphic Design', 'Visual Arts', 'Illustration', 'Graphic'], 'Drawings|Calligraphy': ['Drawings', 'Calligraphy'], 'Photography': ['Photography'], 'Empowerment': ['Empowerment'], 'project': ['project'], 'Video editing': ['Video editing'], 'Inorganic Chemistry': ['Inorganic Chemistry'], 'Programming languages': ['Programming languages'], 'Conceptual|Graphic Design': ['Conceptual', 'Graphic Design'], 'HR Management': ['HR Management'], 'Human Resources|HR Management': ['Human Resources', 'HR Management'], 'Mass Media|International Relations': ['Mass Media', 'International Relations'], 'Sculptures|Artistic design': ['Sculptures', 'Artistic design'], 'Fashion Design|Ceramics|Artistic design': ['Fashion Design', 'Ceramics', 'Artistic design'], 'Craft|Artistic design': ['Craft', 'Artistic design'], 'Fashion Design|Visual Arts|Conceptual|Artistic design': ['Fashion Design', 'Visual Arts', 'Conceptual', 'Artistic design'], 'Photography|Fashion Design|Visual Arts|Graphic Design|Artistic design': ['Photography', 'Fashion Design', 'Visual Arts', 'Graphic Design', 'Artistic design'], 'Fashion Design|Visual Arts|Graphic Design|Artistic design|Graphic|Illustration': ['Fashion Design', 'Visual Arts', 'Graphic Design', 'Artistic design', 'Graphic', 'Illustration'], 'Mathematics|Linear Algebra': ['Mathematics', 'Linear Algebra'], 'Electronics & electrical Technology|Electrical Machines': ['Electronics & electrical Technology', 'Electrical Machines'], 'Auditing|Internal Audit': ['Auditing', 'Internal Audit'], 'E Commerce|E Transactions': ['E Commerce', 'E Transactions'], 'Computer Technology|Computation': ['Computer Technology', 'Computation'], 'Auditing|Internal Financial Control': ['Auditing', 'Internal Financial Control'], 'Taxation|Custom Laws': ['Taxation', 'Custom Laws'], 'Auditing|Audit Evidence': ['Auditing', 'Audit Evidence'], 'Taxation|GST': ['Taxation', 'GST'], 'Taxation|Direct Tax': ['Taxation', 'Direct Tax'], 'Auditing|Secratarial Audit': ['Auditing', 'Secratarial Audit'], 'Banking|Banking Companies': ['Banking', 'Banking Companies'], 'Banking|Banking Technology': ['Banking', 'Banking Technology'], 'Auditing|Audit Remark': ['Auditing', 'Audit Remark'], 'Auditing|Cost Audit': ['Auditing', 'Cost Audit'], 'Auditing|Statuary Audit': ['Auditing', 'Statuary Audit'], 'Computer Technology|Programming languages': ['Computer Technology', 'Programming languages'], 'Legal Studies|Legal System': ['Legal Studies', 'Legal System'], 'Sports Coaching|Sports Law': ['Sports Coaching', 'Sports Law'], 'Economics|Economics Sociology': ['Economics', 'Economics Sociology'], 'Economics|Revenue Concept': ['Economics', 'Revenue Concept'], 'Human Rights|Rights and Duties': ['Human Rights', 'Rights and Duties'], 'Sociology|Sociology of Religion': ['Sociology', 'Sociology of Religion'], 'Sociology|Sociology in India': ['Sociology', 'Sociology in 
India'], 'Political Science|International Politics': ['Political Science', 'International Politics'], 'Political Science|Colonialism In India': ['Political Science', 'Colonialism In India'], 'Legal Studies|Labor Law': ['Legal Studies', 'Labor Law'], 'Computer Technology|Design and Analysis of Algorithms|Programming languages': ['Computer Technology', 'Design and Analysis of Algorithms', 'Programming languages'], 'Computer Technology|Computer Application|Programming languages|Information Technology': ['Computer Technology', 'Computer Application', 'Programming languages', 'Information Technology'], 'Graphics|Computer Creation': ['Graphics', 'Computer Creation'], 'E Commerce|Other Online Platforms': ['E Commerce', 'Other Online Platforms'], 'Drawing': ['Drawing'], 'Drawings|Artistic design': ['Drawings', 'Artistic design'], 'Drawings|Artistic design|Illustration': ['Drawings', 'Artistic design', 'Illustration'], 'E Commerce|Shopping Platform|Other Online Platforms': ['E Commerce', 'Shopping Platform', 'Other Online Platforms'], 'Computer Technology|Hardware|Information Technology': ['Computer Technology', 'Hardware', 'Information Technology'], 'Computer Technology|Computation|Computer Application': ['Computer Technology', 'Computation', 'Computer Application'], 'Business|Business Skills': ['Business', 'Business Skills'], 'Computer Technology|Computer Application|Hardware|Information Technology': ['Computer Technology', 'Computer Application', 'Hardware', 'Information Technology'], 'Computer Technology|Computer Application': ['Computer Technology', 'Computer Application'], 'Computer Technology|Mobile Applications': ['Computer Technology', 'Mobile Applications'], 'Computer Technology|Computer Application|Operating Systems': ['Computer Technology', 'Computer Application', 'Operating Systems'], 'Computer Technology|Information Technology|Computer Application': ['Computer Technology', 'Information Technology', 'Computer Application'], 'Computer Technology|Computer Application|Information Technology': ['Computer Technology', 'Computer Application', 'Information Technology'], 'Communication|Basics of Communiaction': ['Communication', 'Basics of Communiaction'], 'Drawings|Painting|Visual Arts|Graphic Design|Prints|Illustration': ['Drawings', 'Painting', 'Visual Arts', 'Graphic Design', 'Prints', 'Illustration'], 'Drawings|Visual Arts|Painting|Graphic Design|Artistic design|Illustration': ['Drawings', 'Visual Arts', 'Painting', 'Graphic Design', 'Artistic design', 'Illustration'], 'Computer Technology|Artificial Intelligence': ['Computer Technology', 'Artificial Intelligence'], 'Marketing|Advertising': ['Marketing', 'Advertising'], 'Political Science|Political Thought': ['Political Science', 'Political Thought'], 'Accounting|Fundamental Of Accounting': ['Accounting', 'Fundamental Of Accounting'], 'Mixed Media': ['Mixed Media'], 'Management|Team Mangememnt': ['Management', 'Team Mangememnt'], 'Business|Corporate Social Responsibilities': ['Business', 'Corporate Social Responsibilities'], 'Philosophy|Applied Ethics': ['Philosophy', 'Applied Ethics'], 'Human Resources|Performance In Organization|Organizational Behaviour|HR Management': ['Human Resources', 'Performance In Organization', 'Organizational Behaviour', 'HR Management'], 'Fashion Desigining|Fashion Textile|Fashion Manufacturing|Fashion Techniques|Garment Production|Garment Technology': ['Fashion Desigining', 'Fashion Textile', 'Fashion Manufacturing', 'Fashion Techniques', 'Garment Production', 'Garment Technology'], 'Marketing|Promotion And 
Distribution Decisions': ['Marketing', 'Promotion And Distribution Decisions'], 'Biotechnology|Genetic Engineering': ['Biotechnology', 'Genetic Engineering'], 'Physiology|Neurology': ['Physiology', 'Neurology'], 'Mass Media|Indian Government': ['Mass Media', 'Indian Government'], 'Physiology|Osteology': ['Physiology', 'Osteology'], 'Physiology|Cardiology': ['Physiology', 'Cardiology'], 'Zoology|Ecology': ['Zoology', 'Ecology'], 'Computer Technology|Cloud Computing': ['Computer Technology', 'Cloud Computing'], 'Physiology|Radiology': ['Physiology', 'Radiology'], 'Computer Technology|Operating Systems': ['Computer Technology', 'Operating Systems'], 'Biotechnology|Molecular Biology': ['Biotechnology', 'Molecular Biology'], 'Physiology|Gastroenterology|Cardiology': ['Physiology', 'Gastroenterology', 'Cardiology'], 'Philosophy|Logic': ['Philosophy', 'Logic'], 'Watercolours': ['Watercolours'], 'Drawings|Watercolours': ['Drawings', 'Watercolours'], 'Mixed Media|Conceptual': ['Mixed Media', 'Conceptual'], 'Drawings|Painting|Watercolours': ['Drawings', 'Painting', 'Watercolours'], 'Visual Arts': ['Visual Arts'], 'Conceptual': ['Conceptual'], 'Tapestry': ['Tapestry'], 'Calligraphy': ['Calligraphy'], 'Human Resources|Performance In Organization': ['Human Resources', 'Performance In Organization'], 'Computer Technology|Robotics|Data Science|Information Technology|Artificial Intelligence': ['Computer Technology', 'Robotics', 'Data Science', 'Information Technology', 'Artificial Intelligence'], 'Philosophy|Public Philosophy': ['Philosophy', 'Public Philosophy'], 'Digital Marketing': ['Digital Marketing'], 'Mass Media|Videography': ['Mass Media', 'Videography'], 'Drawings|Painting|Visual Arts|Artistic design|Watercolours|Acrylics': ['Drawings', 'Painting', 'Visual Arts', 'Artistic design', 'Watercolours', 'Acrylics'], 'Biotechnology|Plant Biotechnology': ['Biotechnology', 'Plant Biotechnology'], 'Sports Coaching|Sports Event': ['Sports Coaching', 'Sports Event'], 'Political Science|Government and Politics': ['Political Science', 'Government and Politics'], 'Geography|Physical Geography': ['Geography', 'Physical Geography'], 'Computer Technology|Data Science': ['Computer Technology', 'Data Science'], 'Computer Technology|Machine Learning': ['Computer Technology', 'Machine Learning'], 'Graphic Design': ['Graphic Design'], 'Graphic Design|Visual Arts': ['Graphic Design', 'Visual Arts'], 'Education': ['Education'], 'Business|Business Organisation': ['Business', 'Business Organisation'], 'Zoology|Environmental Biology': ['Zoology', 'Environmental Biology'], 'Human Rights|Fundamental Rights': ['Human Rights', 'Fundamental Rights'], 'Computer Technology|Design and Analysis of Algorithms': ['Computer Technology', 'Design and Analysis of Algorithms'], 'Marketing|Marketing Management': ['Marketing', 'Marketing Management'], 'Accounting|Partnership Accounting|Corporate Accounting|Accounting Theory And Practices': ['Accounting', 'Partnership Accounting', 'Corporate Accounting', 'Accounting Theory And Practices'], 'Legal Studies|Income Tax Laws': ['Legal Studies', 'Income Tax Laws'], 'Graphics|Articulation|Computer Creation': ['Graphics', 'Articulation', 'Computer Creation'], 'Archeology|Human Prehistory': ['Archeology', 'Human Prehistory'], 'Business|Start Ups|Entreperneurship|Business Strategies|Venture Capitalist|Business Planning': ['Business', 'Start Ups', 'Entreperneurship', 'Business Strategies', 'Venture Capitalist', 'Business Planning'], 'Geography|Indian Geography': ['Geography', 'Indian Geography'], 'E 
Commerce|Shopping Platform|Other Online Platforms|Digital India': ['E Commerce', 'Shopping Platform', 'Other Online Platforms', 'Digital India'], 'Business|Bio-entrepreneurship': ['Business', 'Bio-entrepreneurship'], 'Literature|Stories': ['Literature', 'Stories'], 'History|Indian History': ['History', 'Indian History'], 'ViDEO': ['ViDEO'], 'Management|Business Management': ['Management', 'Business Management'], 'Visual Arts|Photography': ['Visual Arts', 'Photography'], 'Social Work|Health Education': ['Social Work', 'Health Education'], 'Social Work|Social Tech': ['Social Work', 'Social Tech'], 'Social Work|Humanities': ['Social Work', 'Humanities'], 'Photography|Visual Arts': ['Photography', 'Visual Arts'], 'Accounting|Financial Accounting': ['Accounting', 'Financial Accounting'], 'Psycholgy|Social Psychology': ['Psycholgy', 'Social Psychology'], 'Photography|Conceptual': ['Photography', 'Conceptual'], 'Physics|Instrumentation': ['Physics', 'Instrumentation'], 'Craft work': ['Craft work'], 'Social Work|Social Interventions|Substance Abuse': ['Social Work', 'Social Interventions', 'Substance Abuse'], 'Economics|Indian Economy': ['Economics', 'Indian Economy'], 'Social Work|Substance Abuse|Social Interventions|Humanities': ['Social Work', 'Substance Abuse', 'Social Interventions', 'Humanities'], 'Social Work|Social Interventions|Substance Abuse|Health Education': ['Social Work', 'Social Interventions', 'Substance Abuse', 'Health Education'], 'Social Work|Humanities|Social Interventions': ['Social Work', 'Humanities', 'Social Interventions'], 'Finance|Financial Analysis': ['Finance', 'Financial Analysis'], 'Architecture': ['Architecture'], 'Photography|Architecture|Painting': ['Photography', 'Architecture', 'Painting'], 'Computer Technology|Cloud Computing|Artificial Intelligence|Information Technology|Programming languages': ['Computer Technology', 'Cloud Computing', 'Artificial Intelligence', 'Information Technology', 'Programming languages'], 'Computer Technology|Design and Analysis of Algorithms|Web designing|Database Management|Artificial Intelligence': ['Computer Technology', 'Design and Analysis of Algorithms', 'Web designing', 'Database Management', 'Artificial Intelligence'], 'Literature|Movements in Literature': ['Literature', 'Movements in Literature'], 'Physics|Atomic Physics|Energy Physics|Quantum Mecahnics|Instrumentation': ['Physics', 'Atomic Physics', 'Energy Physics', 'Quantum Mecahnics', 'Instrumentation'], 'Business|Business Strategies|Venture Capitalist': ['Business', 'Business Strategies', 'Venture Capitalist'], 'Photography|Architecture|Illustration': ['Photography', 'Architecture', 'Illustration'], 'Photography|Architecture': ['Photography', 'Architecture'], 'Human Rights|Fundamental Rights|Protection & Enforcement': ['Human Rights', 'Fundamental Rights', 'Protection & Enforcement'], 'Environment Studies|Pollution|Environmental Biology': ['Environment Studies', 'Pollution', 'Environmental Biology'], 'Photography|Conceptual|Illustration': ['Photography', 'Conceptual', 'Illustration'], 'Physics|Quantum Mecahnics|Industrial Instrumentation|Atomic Physics|Nuclear & Particle Physics|Energy Physics': ['Physics', 'Quantum Mecahnics', 'Industrial Instrumentation', 'Atomic Physics', 'Nuclear & Particle Physics', 'Energy Physics'], 'Biotechnology|Environmental Biotechnology': ['Biotechnology', 'Environmental Biotechnology'], 'Legal Studies|Alternate Dispute Resolution|International Treaties|International Agreement': ['Legal Studies', 'Alternate Dispute Resolution', 
'International Treaties', 'International Agreement'], 'Marketing|Principles Of Marketing|Marketing Research Methadology|Marketing Management|International Marketing': ['Marketing', 'Principles Of Marketing', 'Marketing Research Methadology', 'Marketing Management', 'International Marketing'], 'Sculptures': ['Sculptures'], 'Psycholgy|Child Development': ['Psycholgy', 'Child Development'], 'Economics|Economic Policies|Break even Point|Foreign Economy': ['Economics', 'Economic Policies', 'Break even Point', 'Foreign Economy'], 'Psycholgy|Psychological Growth': ['Psycholgy', 'Psychological Growth'], 'Management|Business Management|Technology Mangement|Managerial Activity|Creative And Lateral Management': ['Management', 'Business Management', 'Technology Mangement', 'Managerial Activity', 'Creative And Lateral Management'], 'Business': ['Business'], 'Business|Business Strategies|Business Enviorment|New Venture Planning|Foreign Business|Business Organisation': ['Business', 'Business Strategies', 'Business Enviorment', 'New Venture Planning', 'Foreign Business', 'Business Organisation'], 'Marketing|Principles Of Marketing|International Marketing|Promotion And Distribution Decisions': ['Marketing', 'Principles Of Marketing', 'International Marketing', 'Promotion And Distribution Decisions'], 'Social Work|NGO': ['Social Work', 'NGO'], 'Fine Arts|Painting': ['Fine Arts', 'Painting'], 'Legal Studies|Legal System|Legal Tradition|Nationality Law|Government Law': ['Legal Studies', 'Legal System', 'Legal Tradition', 'Nationality Law', 'Government Law'], 'Business|Entreperneurship|Venture Capitalist|Business Enviorment|Start Ups|Business Planning|New Venture Planning': ['Business', 'Entreperneurship', 'Venture Capitalist', 'Business Enviorment', 'Start Ups', 'Business Planning', 'New Venture Planning'], 'Legal Studies|Company Law|Legal System|Banking Law': ['Legal Studies', 'Company Law', 'Legal System', 'Banking Law'], 'Computer Technology|Web designing|Artificial Intelligence|Computer Application|Machine Learning|Frontend Development': ['Computer Technology', 'Web designing', 'Artificial Intelligence', 'Computer Application', 'Machine Learning', 'Frontend Development'], 'Music': ['Music'], 'Sculptures|Wood Crafts|Craft|Wood carving': ['Sculptures', 'Wood Crafts', 'Craft', 'Wood carving'], 'Fashion Design': ['Fashion Design'], 'Painting|Craft|Artistic design': ['Painting', 'Craft', 'Artistic design'], '2D Composition|Watercolours|Painting': ['2D Composition', 'Watercolours', 'Painting'], 'Painting|Watercolours|2D Composition': ['Painting', 'Watercolours', '2D Composition'], 'Visual Arts|Craft|Conceptual': ['Visual Arts', 'Craft', 'Conceptual'], 'Painting|Visual Arts|Craft|Conceptual': ['Painting', 'Visual Arts', 'Craft', 'Conceptual'], 'Painting|Craft|Mixed Media|2D Composition': ['Painting', 'Craft', 'Mixed Media', '2D Composition'], 'Visual Arts|Craft|Mixed Media|Conceptual|Mosaic painting': ['Visual Arts', 'Craft', 'Mixed Media', 'Conceptual', 'Mosaic painting'], 'Craft|Drawings|Conceptual|2D Composition': ['Craft', 'Drawings', 'Conceptual', '2D Composition'], 'Craft|Mixed Media|Conceptual|Mosaic painting|2D Composition': ['Craft', 'Mixed Media', 'Conceptual', 'Mosaic painting', '2D Composition'], 'Video': ['Video'], 'Fashion Desigining|Fashion Illustration|Pattern Cutting|Fashion Communication': ['Fashion Desigining', 'Fashion Illustration', 'Pattern Cutting', 'Fashion Communication'], 'Biotechnology|Animal Biotechnology': ['Biotechnology', 'Animal Biotechnology'], 'Fashion Desigining|Pattern & 
Culture|Fashion Trends|Fashion Portfoilio': ['Fashion Desigining', 'Pattern & Culture', 'Fashion Trends', 'Fashion Portfoilio'], 'Sketch Video': ['Sketch Video'], 'Fashion Desigining|Pattern & Culture|Fashion Textile|Fashion Trends': ['Fashion Desigining', 'Pattern & Culture', 'Fashion Textile', 'Fashion Trends'], 'Test': ['Test'], 'Professionalism': ['Professionalism'], 'Art': ['Art'], 'Science;Technology': ['Science;Technology'], 'Art; Science': ['Art; Science'], 'Fashion Design|Illustration|Watercolours|Drawings': ['Fashion Design', 'Illustration', 'Watercolours', 'Drawings'], 'Drawings|Fashion Design|Illustration|Watercolours': ['Drawings', 'Fashion Design', 'Illustration', 'Watercolours'], 'Fashion Design|Illustration|Watercolours': ['Fashion Design', 'Illustration', 'Watercolours'], 'Drawings|Fashion Design|Mixed Media|Conceptual|Illustration|Pen and ink|Watercolours': ['Drawings', 'Fashion Design', 'Mixed Media', 'Conceptual', 'Illustration', 'Pen and ink', 'Watercolours'], 'Fashion Design|Illustration|Conceptual|Watercolours|Pen and ink': ['Fashion Design', 'Illustration', 'Conceptual', 'Watercolours', 'Pen and ink'], 'Artistic design|Logo Design|Graphic|Illustration': ['Artistic design', 'Logo Design', 'Graphic', 'Illustration'], 'Visual Arts|Graphic Design|Artistic design|Graphic|Illustration': ['Visual Arts', 'Graphic Design', 'Artistic design', 'Graphic', 'Illustration'], 'Learning': ['Learning'], 'Photography|Architecture|Visual Arts|Graphic Design': ['Photography', 'Architecture', 'Visual Arts', 'Graphic Design'], 'Photography|Architecture|Visual Arts|Graphic Design|Artistic design|Graphic|Logo Design': ['Photography', 'Architecture', 'Visual Arts', 'Graphic Design', 'Artistic design', 'Graphic', 'Logo Design'], 'Visual Arts|Graphic Design|2D Composition|Logo Design': ['Visual Arts', 'Graphic Design', '2D Composition', 'Logo Design'], 'Artistic design': ['Artistic design'], 'Literature|Stories|Fictions|Movements in Literature': ['Literature', 'Stories', 'Fictions', 'Movements in Literature'], 'Technology': ['Technology'], 'Computer Technology|Computation|Computer Application|Cloud Computing': ['Computer Technology', 'Computation', 'Computer Application', 'Cloud Computing'], 'Visual Arts|Calligraphy|Pen and ink': ['Visual Arts', 'Calligraphy', 'Pen and ink'], 'Mixed Media|Calligraphy|Pen and ink': ['Mixed Media', 'Calligraphy', 'Pen and ink'], 'Typography|Pen and ink': ['Typography', 'Pen and ink'], 'Typography|Calligraphy|Pen and ink': ['Typography', 'Calligraphy', 'Pen and ink'], 'Calligraphy|Typography|Pen and ink': ['Calligraphy', 'Typography', 'Pen and ink'], 'Calligraphy|Pen and ink': ['Calligraphy', 'Pen and ink'], 'Mass Media|Media And Society': ['Mass Media', 'Media And Society'], 'Science; Technology': ['Science; Technology']}
###Markdown
**Now let's update posts_df**
###Code
# we will prepare a list which will store all the new rows and
# column values for now
updated_data = []
for i in categories:
# create a dummy dataframe which will store row values for the current
# category
dummy = posts_df[posts_df['category']==i]
id = dummy['_id'].values[0]
title = dummy['title'].values[0]
post_type = dummy[' post_type'].values[0]
for j in categories[i]:
        # now we will create a dictionary which will store the row values of the
        # main category and will be assigned to each of its split categories
dict1 = {}
dict1.update({'_id':id})
dict1.update({'title':title})
dict1.update({'category':j})
dict1.update({' post_type':post_type})
updated_data.append(dict1)
posts_df_updated = pd.DataFrame(updated_data)
###Output
_____no_output_____
###Markdown
**Now let's check the updated posts_df**
###Code
posts_df_updated.loc[:, ['_id', 'category', ' post_type']].head()
posts_df.loc[:, ['_id', 'category', ' post_type']].head()
posts_df_updated.describe()
###Output
_____no_output_____
###Markdown
**As we can see, we have successfully created our posts dataframe; now we will merge the dataframes.** MERGING OF THE DATAFRAMES **We will merge the views_df and posts_df_updated dataframes as they contain all the columns and data that we will need for our recommendation system.**
###Code
views_df.columns
posts_df_updated.columns
###Output
_____no_output_____
###Markdown
**As we know, merging needs a common column: views_df already has post_id, while posts_df_updated has _id, so we have to rename one of them to perform the merge operation.** **We will rename _id to post_id in posts_df_updated.**
###Code
# using pd.rename function to rename _id to post_id
posts_df_updated.rename(columns={'_id':'post_id'}, inplace=True)
posts_df_updated.columns
# using pd.merge to join the two dataframes; by default it performs an
# inner join on the shared column, post_id
main_df = pd.merge(views_df, posts_df_updated)
main_df.columns
main_df.head(10)
main_df.tail(20)
main_df.describe()
###Output
_____no_output_____
###Markdown
**Now the merging is done and we have our actual dataframe to work on; let's move to the next step, which is Collaborative Filtering.** **Collaborative Filtering** **We are using user-user collaborative filtering, with cosine similarity to determine the similarity between users. For that we need to build a matrix: categories will be the columns and user_ids will be the rows. We then use a k-nearest-neighbours model to find the similar users.** CONSTRUCTING THE MATRIX **We will create a sparse m×n matrix. All the unique categories will be the column values and user_ids will be the row values, and each cell will hold the number of times a user has viewed a category. The higher the cell value, the higher the user's interest in that particular category.** **Let's find all the unique user_ids and categories**
###Code
users = list(main_df['user_id'].unique())
len(users)
categories = list(main_df['category'].unique())
len(categories)
###Output
_____no_output_____
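###Markdown
Before building the full matrix, here is the similarity measure itself on two hypothetical view-count vectors (values made up for illustration). Cosine similarity is the dot product divided by the product of the norms; the KNN model below uses 1 minus this value as its distance.
###Code
import numpy as np

# cosine(u, v) = (u . v) / (||u|| * ||v||)
u = np.array([4, 3, 0, 1])
v = np.array([2, 3, 1, 0])
print(u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v)))  # ~0.89
###Output
_____no_output_____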
###Markdown
**Our matrix will be of size 88×234**
###Code
# creating the matrix: one row per user, one column per category
user_mat = [[] for i in range(len(users))]
for i in range(len(users)):
    for j in range(len(categories)):
        # count how many view rows exist for the current user_id and
        # the current category
        value = len(main_df[
            (main_df['user_id']==users[i])&
            (main_df['category']==categories[j])
        ])
        user_mat[i].append(value)
for i in user_mat[0]:
print(i, end=" ")
###Output
4 3 3 3 2 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 0 0 0 0 1 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
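###Markdown
As a side note, pandas can build the same user-by-category count matrix in one call; a minimal sketch (reindexed so rows and columns follow our users and categories lists):
###Code
# pd.crosstab counts co-occurrences of user_id and category in a single pass
count_mat = pd.crosstab(main_df['user_id'], main_df['category'])
count_mat = count_mat.reindex(index=users, columns=categories)
print(count_mat.shape)
###Output
_____no_output_____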
###Markdown
**Now we will convert the list of lists to an actual sparse matrix using the scipy.sparse library**
###Code
# importing csr_matrix from scipy.sparse
from scipy.sparse import csr_matrix # for creating a sparse matrix
user_mat = csr_matrix(user_mat)
user_mat
###Output
_____no_output_____
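###Markdown
A quick look at how sparse the matrix actually is; most users touch only a handful of the 234 categories, which is exactly the case CSR storage is designed for.
###Code
# fraction of non-zero cells in the user-category matrix
print(user_mat.nnz, 'non-zeros out of', user_mat.shape[0] * user_mat.shape[1])
###Output
_____no_output_____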
###Markdown
**As we have made our matrix, let's train the model.** TRAINING THE MODEL **We are using a k-nearest-neighbours model, and the similarity measure we are using is cosine similarity.** **Importing the model**
###Code
# importing NearestNeighbors from sklearn.neighbors
from sklearn.neighbors import NearestNeighbors
# setting the parameters
model_knn = NearestNeighbors(
metric='cosine', algorithm='brute', n_neighbors=15)
# fitting the model
model_knn.fit(user_mat)
###Output
_____no_output_____
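###Markdown
With metric='cosine' the distances returned by kneighbors are 1 minus the cosine similarity, so 0 means an identical viewing profile. A minimal sketch of a raw neighbour query against the fitted model (the recommender function below wraps exactly this call):
###Code
# query the 5 nearest neighbours of the first user; row 0 is the user itself
distances, indices = model_knn.kneighbors(user_mat[0], n_neighbors=5)
print(distances)
print(indices)
###Output
_____no_output_____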
###Markdown
CREATING A FUNCTION FOR RECOMMENDATION **We have trained our model. Now let's make a function which will take a user_id as input and recommend categories that the user would like.**
###Code
# creating a function which will recommend new categories, taking user_id
# as argument with the matrix and model as defaults
def recommender(user_id, data=user_mat, model=model_knn):
    model.fit(data)
    index = users.index(user_id)
    current_user = main_df[main_df['user_id']==user_id]
    # find the 15 users nearest to this one in cosine distance
    distances, indices = model.kneighbors(data[index], 15)
    recommendation = []
    for i in indices[0]:
        user = main_df[main_df['user_id']==users[i]]
        # keep the categories a similar user viewed that this user hasn't
        for cat in user['category'].unique():
            if cat not in current_user['category'].unique():
                recommendation.append(cat)
    print(recommendation)
    print(indices)
###Output
_____no_output_____
###Markdown
**What we did above is: first we found the 15 most similar users to our current user, then we recommended the categories which he/she has not viewed yet but similar users have. Now let's test our function for a user.** **Let's see what the system recommends for the first user. But before that, let's see the categories that this user has already viewed.**
###Code
main_df[main_df['user_id']==users[0]]['category'].unique()
###Output
_____no_output_____
###Markdown
**These are the categories the user has viewed; now let's see the recommendations for our user.**
###Code
recommender(users[0])
###Output
['Fashion Portfoilio', 'Fashion Design', 'Watercolours', 'Drawings', 'Sketch Video', 'Political Science', 'Colonialism In India', 'Legal Studies', 'Labor Law', 'Banking', 'Banking Companies', 'Professionalism', 'Fashion Portfoilio', 'Painting', 'Craft', 'Technology', 'Drawings', 'Business', 'Learning', 'Literature', 'Stories', 'Fictions', 'Movements in Literature', 'Fashion Portfoilio', 'Business Strategies', 'Business Enviorment', 'New Venture Planning', 'Foreign Business', 'Business Organisation', 'Fashion Design', 'Watercolours', 'Conceptual', 'Pen and ink', 'Mixed Media', '2D Composition', 'Communication', 'Basics of Communiaction', 'Sketch Video', 'Professionalism', 'Biotechnology', 'Molecular Biology', 'Fine Arts', 'Painting', 'Management', 'Business Management', 'Technology Mangement', 'Managerial Activity', 'Creative And Lateral Management', 'Economics', 'Economic Policies', 'Break even Point', 'Foreign Economy', 'Computer Technology', 'Operating Systems', 'Fashion Manufacturing', 'Fashion Techniques', 'Garment Production', 'Garment Technology', 'Digital Marketing', 'Learning', 'Fashion Design', 'Watercolours', 'Drawings', 'Conceptual', 'Pen and ink', 'Mixed Media', 'Political Science', 'Colonialism In India', 'Legal Studies', 'Labor Law', 'Banking', 'Banking Companies', 'Auditing', 'Statuary Audit', 'Legal System', 'Sports Coaching', 'Sports Law', 'Taxation', 'Custom Laws', 'Economics', 'Revenue Concept', 'Human Rights', 'Rights and Duties', 'Sociology', 'Sociology of Religion', 'Sociology in India', 'International Politics', 'Mass Media', 'Videography', 'Biotechnology', 'Plant Biotechnology', 'Painting', 'Technology', 'Digital Marketing', 'Business', 'Learning', 'Literature', 'Stories', 'Fictions', 'Movements in Literature', 'Business Strategies', 'Business Enviorment', 'New Venture Planning', 'Foreign Business', 'Business Organisation', 'Conceptual', 'Fashion Design', 'Watercolours', 'Drawings', 'Pen and ink', 'Mixed Media', '2D Composition', 'Sketch Video', 'Professionalism', 'Philosophy', 'Public Philosophy', 'Test', 'Applied Ethics', 'Entreperneurship', 'Venture Capitalist', 'Start Ups', 'Business Planning', 'Painting', 'Prints', 'Biotechnology', 'Animal Biotechnology', 'Craft', 'Mosaic painting', 'Social Work', 'Humanities', 'Social Interventions', 'Sculptures', 'Marketing', 'Principles Of Marketing', 'Marketing Research Methadology', 'Marketing Management', 'International Marketing', 'Music', 'Fine Arts', 'Promotion And Distribution Decisions', 'Video', 'Legal Studies', 'Legal System', 'Legal Tradition', 'Nationality Law', 'Government Law', 'Plant Biotechnology', 'Management', 'Business Management', 'Technology Mangement', 'Managerial Activity', 'Creative And Lateral Management', 'Human Rights', 'Fundamental Rights', 'Protection & Enforcement', 'NGO', 'Graphics', 'Articulation', 'Computer Creation', 'Psycholgy', 'Psychological Growth', 'Physics', 'Quantum Mecahnics', 'Industrial Instrumentation', 'Atomic Physics', 'Nuclear & Particle Physics', 'Energy Physics', 'Substance Abuse', 'Environmental Biotechnology', 'Computer Technology', 'Design and Analysis of Algorithms', 'Web designing', 'Database Management', 'Artificial Intelligence', 'Health Education', 'Social Psychology', 'Economics', 'Indian Economy', 'Instrumentation', 'Craft work', 'Acrylics', 'Accounting', 'Financial Accounting', 'ViDEO', 'Archeology', 'Human Prehistory', 'Mass Media', 'Videography', 'Sports Coaching', 'Sports Event', 'Machine Learning', 'Information Technology', 'Partnership Accounting', 'Corporate 
Accounting', 'Accounting Theory And Practices', 'Zoology', 'Environmental Biology', 'Physiology', 'Osteology', 'Education', 'Fashion Manufacturing', 'Fashion Techniques', 'Garment Production', 'Garment Technology', 'Political Science', 'Government and Politics', 'Gastroenterology', 'Cardiology', 'Logic', 'Human Resources', 'Performance In Organization', 'Robotics', 'Data Science', 'International Relations', 'Tapestry', 'Technology', 'Competition Laws', 'Digital Marketing', 'Fashion Portfoilio', 'Fashion Design', 'Conceptual', 'Painting', 'Craft', 'Biotechnology', 'Animal Biotechnology', 'Mixed Media', 'Mosaic painting', '2D Composition', 'Drawings', 'Sculptures', 'Wood Crafts', 'Wood carving', 'Watercolours', 'Ceramics', 'Music', 'Fine Arts', 'Legal Studies', 'Company Law', 'Legal System', 'Banking Law', 'Computer Technology', 'Web designing', 'Artificial Intelligence', 'Computer Application', 'Machine Learning', 'Frontend Development', 'Social Work', 'NGO', 'Graphics', 'Articulation', 'Computer Creation', 'Physics', 'Quantum Mecahnics', 'Industrial Instrumentation', 'Atomic Physics', 'Nuclear & Particle Physics', 'Energy Physics', 'Psycholgy', 'Child Development', 'Alternate Dispute Resolution', 'International Treaties', 'International Agreement', 'Economics', 'Economic Policies', 'Break even Point', 'Foreign Economy', 'Environment Studies', 'Pollution', 'Environmental Biology', 'Business', 'Business Organisation', 'Acrylics', 'ViDEO', 'E Commerce', 'Shopping Platform', 'Other Online Platforms', 'Digital India', 'Bio-entrepreneurship', 'History', 'Indian History', 'Income Tax Laws', 'Mass Media', 'Videography', 'Marketing', 'Marketing Management', 'Design and Analysis of Algorithms', 'Zoology', 'Education', 'Plant Biotechnology', 'Robotics', 'Data Science', 'Information Technology', 'Physiology', 'Radiology', 'Fashion Design', 'Watercolours', 'Drawings', 'Conceptual', 'Pen and ink', 'Mixed Media', 'Painting', 'Craft', 'Mosaic painting', '2D Composition', 'Sculptures', 'Wood Crafts', 'Wood carving', 'Ceramics', 'Fine Arts', 'Fashion Manufacturing', 'Fashion Techniques', 'Garment Production', 'Garment Technology', 'Technology', 'Fashion Design', 'Watercolours', 'Drawings', 'Conceptual', 'Pen and ink', 'Mixed Media', 'Computer Technology', 'Cloud Computing', 'Human Rights', 'Fundamental Rights', 'Biotechnology', 'Plant Biotechnology', 'Mass Media', 'Indian Government', 'Fashion Design', 'Watercolours', 'Drawings', 'Conceptual', 'Pen and ink', 'Mixed Media', 'Physiology', 'Radiology', 'Drawings', 'Painting', 'Watercolours', 'Acrylics', 'Human Resources', 'Performance In Organization', 'Organizational Behaviour', 'HR Management']
[[ 0 1 2 32 3 42 8 15 16 73 37 4 6 75 70]]
###Markdown
**NOTE: We are suggesting the new categories the user would like; the recommend function could also be modified to suggest posts.** **Looks like the system is suggesting categories of interest to the user along with some new ones.** **The list of indices below shows the users most similar to the current user.** **So this ends our Collaborative Filtering. Let's move on to Content Based Filtering.** CONTENT BASED FILTERING **For content based filtering we will prepare an item-profile for each category and a user-profile for each user.** PREPARING ITEM-PROFILES **We have got 234 unique categories. Each category will have its own item-profile: a vector of length equal to the number of posts, containing binary values depending upon whether the category is present in a post or not.**
###Code
main_df['post_id'].describe()
###Output
_____no_output_____
###Markdown
**We will have 234 vectors, each of length 231, containing 0s and 1s**
###Code
import numpy as np
# Lets create a list which will contain all the posts
posts = list(main_df['post_id'].unique())
# Pre-compute the set of categories for each post once, so we don't rescan
# the dataframe for every (category, post) pair
post_categories = {j: set(main_df[main_df['post_id']==j]['category'].unique()) for j in posts}
# Create a dictionary which will store each category as key and their item-profile vector as value
item_profiles = {}
# filling item_profiles with 1 where the category appears in the post, else 0
for i in categories:
    item_profiles.update({i:[]})
    for j in posts:
        item_profiles[i].append(1 if i in post_categories[j] else 0)
# converting lists to vectors or arrays
for i in item_profiles:
    item_profiles[i] = np.array(item_profiles[i])
for i in item_profiles:
    print(i, item_profiles[i])
###Output
Visual Arts [1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 1 1 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0]
... [truncated: one 0/1 item-profile vector per category, in the same format as above] ...
Government Law [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Company Law [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Banking Law [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Web designing [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Artificial Intelligence [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0]
Machine Learning [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Frontend Development [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Plant Biotechnology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Management [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Business Management [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Technology Mangement [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Managerial Activity [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Creative And Lateral Management [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Fundamental Rights [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Protection & Enforcement [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
NGO [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Eco System [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Graphics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Articulation [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Computer Creation [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Psycholgy [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Psychological Growth [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Physics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Quantum Mecahnics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Industrial Instrumentation [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Atomic Physics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Nuclear & Particle Physics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Energy Physics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Substance Abuse [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Zoology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Ecology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Physiology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 1 0 0 0 0 0]
Cardiology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0]
Child Development [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Political Thought [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Alternate Dispute Resolution [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
International Treaties [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
International Agreement [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Geography [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Physical Geography [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Genetic Engineering [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Economic Policies [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Break even Point [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Foreign Economy [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Corporate Social Responsibilities [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Environment Studies [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Pollution [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Environmental Biology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Team Mangememnt [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Environmental Biotechnology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Indian Geography [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Design and Analysis of Algorithms [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Database Management [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Instrumentation [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Finance [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Financial Analysis [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Health Education [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Social Psychology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Indian Economy [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Craft work [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Acrylics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Accounting [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Financial Accounting [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
ViDEO [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Performance In Organization [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0]
Organizational Behaviour [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Social Tech [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Radiology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
E Commerce [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Shopping Platform [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Other Online Platforms [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Digital India [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Bio-entrepreneurship [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
History [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Indian History [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Archeology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Human Prehistory [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Income Tax Laws [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Videography [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Sports Event [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Partnership Accounting [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Corporate Accounting [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Accounting Theory And Practices [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
E Transactions [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Mobile Applications [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Osteology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Education [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Fashion Manufacturing [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Fashion Techniques [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Garment Production [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Garment Technology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Fundamental Of Accounting [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Indian Government [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0]
Data Science [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
0 1 0 0 0 0 0 0 0]
Government and Politics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0]
Gastroenterology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0]
Logic [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 0 0 0]
Robotics [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0]
Neurology [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0]
Empowerment [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0]
International Relations [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 0 0 0]
Tapestry [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1]
###Markdown
PREPARING THE USER-PROFILE **We will prepare the user profile from the item profiles that we have already prepared. Let's say a user has viewed posts spanning n different categories, and let I1, I2, ..., In be their item profiles. The user profile will be the weighted average of these item profiles. For the weight of each category, we divide the number of times that category appears across the posts the user has viewed by the total number of posts the user has viewed.**
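**In symbols (a compact restatement of the computation implemented in the code below, where $n_c$ counts how many of the user's viewed entries fall in category $c$, $|P_u|$ is the number of distinct posts the user has viewed, and $I_c$ is the item profile of category $c$):**

$$w_c = \frac{n_c}{\lvert P_u \rvert}, \qquad \text{user profile} = \frac{1}{\lvert P_u \rvert} \sum_{c} w_c \, I_c$$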
###Code
# We will create a dictionary with user_id as key and user_profile as value
user_profiles = {}
# Filling the user_profiles
for i in users:
    user_profiles.update({i:[]})
    # Selecting all the interactions of the current user
    current_user = main_df[main_df['user_id']==i]
    # Listing all the categories the user has viewed
    current_user_categories = list(current_user['category'].unique())
    # Listing all the posts the user has viewed
    current_user_post = list(current_user['post_id'].unique())
    # Now we will find the weight for each category in current_user_categories
    # Create a dictionary with category as key and its weight as value
    category_weight = {}
    # Create a zero vector of length equal to the number of posts; it will accumulate the user profile
    result_vector = np.array([0 for _ in range(len(posts))])
    for j in current_user_categories:
        category_weight.update({j:0})
        # Count how many times category j has appeared
        for k in list(current_user['category']):
            if j==k:
                category_weight[j]+=1
        # Divide by the number of posts the user has viewed
        category_weight[j] = category_weight[j]/len(current_user_post)
        # The weight is ready; add this category's weighted item profile to the running total
        result_vector = result_vector + (category_weight[j]*item_profiles[j])
    user_profiles[i] = result_vector/len(current_user_post)
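# Optional sanity check: every user profile should be a vector with one entry per post
assert all(len(v) == len(posts) for v in user_profiles.values())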
for i in user_profiles:
if i in users[0:5]:
print(i, user_profiles[i])
print()
###Output
5df49b32cc709107827fb3c7 [0.12396694 0. 0. 0. 0.00826446 0.00826446
0. 0.00826446 0.00826446 0. 0. 0.
0. 0. 0. 0. 0. 0.04132231
0. 0.15702479 0. 0.00826446 0.09090909 0.02479339
0.03305785 0. 0. 0.02479339 0.01652893 0.01652893
0.01652893 0.01652893 0. 0.05785124 0.07438017 0.
0.00826446 0. 0. 0. 0. 0.
0.08264463 0.01652893 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.04132231 0.04132231 0. 0.00826446 0.
0. 0. 0.02479339 0.00826446 0. 0.07438017
0. 0. 0.03305785 0. 0. 0.
0. 0.09917355 0. 0.09917355 0. 0.
0.03305785 0.03305785 0. 0. 0.02479339 0.02479339
0. 0. 0. 0. 0.03305785 0.02479339
0.12396694 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.01652893
0.03305785 0. 0. 0. 0. 0.03305785
0. 0.04958678 0.03305785 0.04958678 0. 0.
0. 0. 0. 0.01652893 0. 0.
0. 0. 0. 0.05785124 0.04958678 0.
0. 0. 0. 0.04958678 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.05785124
0. 0. 0. 0. 0. 0.
0. 0.02479339 0. 0. 0. 0.04132231
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.02479339 0. 0. 0.02479339 0.
0. 0. 0. 0. 0. 0.09917355
0. 0. 0.00826446 0. 0. 0.
0.01652893 0. 0. ]
5ec3ba5374f7660d73aa1201 [0.0625 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.0078125
0. 0. 0. 0. 0. 0.01171875
0. 0.0703125 0. 0.00390625 0.0390625 0.01171875
0.03125 0. 0. 0.01171875 0.03125 0.0390625
0.03125 0.0390625 0. 0.03125 0.03125 0.
0.00390625 0. 0. 0. 0. 0.00390625
0.046875 0.0390625 0.0078125 0.0078125 0.0078125 0.
0.00390625 0. 0. 0. 0. 0.
0. 0.00390625 0. 0. 0. 0.
0. 0. 0.00390625 0. 0. 0.
0.00390625 0.03125 0.0234375 0. 0. 0.
0. 0. 0.01171875 0. 0. 0.046875
0. 0. 0.01171875 0.0078125 0. 0.
0. 0.05078125 0.0078125 0.05078125 0. 0.
0.01171875 0.01171875 0. 0. 0.01171875 0.01171875
0. 0.0078125 0.0078125 0.0078125 0.01171875 0.01953125
0.0703125 0. 0. 0. 0. 0.
0. 0. 0. 0.00390625 0.00390625 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.00390625 0.00390625 0.
0. 0. 0. 0. 0. 0.0078125
0.0234375 0. 0. 0. 0. 0.015625
0. 0.03125 0.015625 0.0234375 0. 0.
0. 0. 0. 0.0078125 0. 0.
0. 0. 0. 0.0390625 0.01953125 0.
0. 0. 0. 0.01953125 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0.00390625 0. 0.
0. 0. 0. 0.015625 0. 0.0234375
0. 0. 0. 0. 0. 0.
0. 0.01953125 0. 0. 0. 0.03515625
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.015625 0.
0. 0.015625 0. 0. 0.01171875 0.
0. 0.00390625 0. 0. 0.0078125 0.05859375
0. 0. 0.0078125 0. 0. 0.
0.0078125 0. 0. ]
5ec2204374f7660d73aa100f [5. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 4. 0. 0. 2. 1.
0. 0. 0. 1. 1. 1. 1. 1. 0. 2. 2. 0. 0. 0. 0. 0. 0. 0. 3. 1. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 1. 0. 0. 3. 0. 0. 1. 0. 0. 0. 0. 3. 0. 4. 0. 0. 1. 1. 0. 0. 1. 1.
0. 0. 0. 0. 1. 1. 5. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 1. 0. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 1. 0. 0. 0. 0. 1. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2. 0. 0. 0. 0. 0. 0.
0. 1. 0. 0. 0. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.
0. 0. 0. 0. 0. 4. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
5d7c994d5720533e15c3b1e9 [0.02444444 0. 0.00111111 0.00111111 0.00222222 0.00222222
0.00222222 0.00333333 0.00222222 0. 0. 0.00444444
0.00111111 0. 0. 0. 0.00222222 0.00777778
0.00111111 0.02555556 0.00444444 0.00111111 0.01444444 0.00444444
0.00888889 0. 0.00777778 0.00777778 0.01666667 0.02111111
0.02222222 0.02777778 0.00222222 0.01888889 0.01444444 0.00111111
0.00111111 0.00111111 0.00111111 0. 0.00222222 0.00111111
0.01777778 0.02111111 0. 0. 0. 0.
0. 0. 0. 0.00111111 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.00111111
0.00111111 0.01 0.00333333 0.00111111 0.00111111 0.
0. 0. 0.00666667 0. 0.00444444 0.02333333
0.00111111 0.00555556 0.00555556 0.00888889 0. 0.00222222
0. 0.02222222 0.00666667 0.02 0. 0.00444444
0.01111111 0.01 0. 0.00222222 0.00444444 0.00444444
0. 0.00555556 0.00777778 0.00777778 0.00888889 0.01
0.03 0. 0.00444444 0. 0. 0.
0. 0. 0. 0. 0. 0.00111111
0. 0.00555556 0. 0. 0. 0.00111111
0. 0. 0. 0.00111111 0. 0.
0. 0. 0. 0. 0. 0.
0.00111111 0.00444444 0.00222222 0. 0.00111111 0.00222222
0.01222222 0.00111111 0. 0.00333333 0.00111111 0.00666667
0.00222222 0.01111111 0.00444444 0.00777778 0.00333333 0.
0. 0.00111111 0. 0.00555556 0. 0.00111111
0. 0. 0. 0.02111111 0.00777778 0.
0.00222222 0. 0. 0.00777778 0. 0.
0.00111111 0. 0. 0.00222222 0. 0.
0. 0.00333333 0.00222222 0. 0. 0.
0. 0.00222222 0.00444444 0.00888889 0. 0.01
0. 0. 0.00111111 0.00111111 0.00222222 0.00111111
0.00222222 0.00888889 0. 0. 0. 0.01555556
0.00111111 0.00111111 0. 0.00111111 0.00111111 0.00111111
0.00111111 0.00111111 0. 0.00111111 0.01111111 0.
0. 0.01 0.00111111 0. 0.00444444 0.
0.00111111 0. 0. 0. 0.00444444 0.02777778
0. 0.00111111 0.00444444 0. 0. 0.
0.00222222 0.00333333 0. ]
5de50d768eab6401affbb135 [0.11 0. 0.02 0.01 0.02 0.02 0.02 0.03 0.02 0.01 0. 0.02 0.01 0.
0. 0. 0. 0.03 0. 0.07 0. 0. 0.02 0.02 0. 0. 0. 0.03
0.11 0.13 0.15 0.18 0.01 0.08 0.03 0.02 0. 0.01 0.01 0. 0. 0.
0.1 0.13 0. 0. 0. 0. 0. 0. 0. 0. 0.01 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01
0. 0. 0. 0. 0.02 0. 0. 0.09 0.01 0.03 0.01 0.04 0. 0.
0. 0.07 0.02 0.09 0. 0.01 0.03 0.04 0. 0.01 0.02 0.02 0. 0.03
0.03 0.03 0.03 0.05 0.14 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.01 0.01 0. 0.02 0. 0. 0. 0. 0. 0. 0.01 0. 0.
0. 0. 0. 0. 0. 0. 0.01 0. 0. 0. 0. 0. 0.07 0.01
0. 0. 0.01 0. 0. 0.05 0. 0.03 0. 0. 0. 0.02 0. 0.02
0. 0. 0. 0. 0. 0.08 0.01 0. 0. 0. 0. 0.01 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.01 0.02 0. 0.01
0.03 0.05 0. 0.02 0. 0. 0.01 0.01 0.02 0.01 0. 0.04 0. 0.
0. 0.09 0.01 0.01 0. 0.01 0.01 0.01 0.01 0.01 0. 0.01 0.05 0.
0. 0. 0.02 0. 0.01 0.02 0.01 0. 0. 0. 0.03 0.11 0. 0.01
0.02 0. 0. 0.01 0. 0.02 0. ]
###Markdown
**So we have successfully calculated the user_profile and item_profile.** CREATING A FUNCTION FOR RECOMMENDATION **In this section we will find the cosine similarity between a user profile and each category's item profile; the higher the cosine value, the higher the similarity.**
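As a quick refresher, cosine similarity is the dot product of two vectors divided by the product of their norms; sklearn's `cosine_similarity`, used below, computes exactly this for 2D arrays. Here is a minimal plain-numpy sketch (the example vectors are made up purely for illustration):
###Code
import numpy as np

def cosine_sim(u, v):
    # dot product scaled by both vector magnitudes; 1.0 means the vectors point the same way
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_sim(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # parallel vectors -> 1.0
print(cosine_sim(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # orthogonal vectors -> 0.0
###Output
_____no_output_____
###Markdown
The function below applies the same measure, vectorized over our user and item profiles.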
###Code
# to find the cosine similarity we need to import cosine_similarity function
from sklearn.metrics.pairwise import cosine_similarity
# we will create a function which takes a user_id as an argument and provides recommendations
# all other arguments will be set to some default values.
def recommender1(user_id, user_profiles = user_profiles, item_profiles=item_profiles):
    # calculate the cosine similarity between each category's item profile and the user's user profile
    # create a dictionary with category as key and cosine similarity as value
similarity = {}
for i in item_profiles:
similarity.update({i:cosine_similarity(user_profiles[user_id].reshape(1, -1), item_profiles[i].reshape(1, -1))})
    # now that we have the similarities, sort them and recommend posts from the top categories
    # that the user has not viewed yet
sorted_similarity = sorted(similarity.items(), key=lambda x: x[1], reverse=True)
    # now we will keep only the posts that the user has not yet viewed
user_posts = list(main_df[main_df['user_id']==user_id]['post_id'].unique())
    # create recommendation list
recommendations = []
    # display the user's viewed posts and categories
print("posts user has viewed:{}".format(user_posts))
print("users viewed categories:{}".format(list(main_df[main_df['user_id']==user_id]['category'].unique())))
    for i in sorted_similarity:
        category_posts = list(main_df[main_df['category']==i[0]]['post_id'].unique())
        for j in category_posts:
            if j not in user_posts:
                recommendations.append([i[0], j])
        # we will recommend the top 20 posts to the user, so stop once we have at least that many
        if len(recommendations) >= 20:
            break
    # trim in case the last category pushed us past 20
    recommendations = recommendations[:20]
    for i in recommendations:
        print(i)
recommender1(users[30])
###Output
posts user has viewed:['5ea7cd9610426255a7aa9bd2']
users viewed categories:['Business']
['Business', '5ea80a7e10426255a7aa9be2']
['Business', '5eadc2f710426255a7aa9ee4']
['Business', '5e4d359cf5561b1994c8e424']
['Business', '5e6eff8aed32a005135d6bb8']
['Business', '5e830dc8a3258347b42f23fb']
['Business', '5e978c7ca3258347b42f2b09']
['Business', '5e9028cea3258347b42f2736']
['Business', '5e8e13dca3258347b42f26f2']
['Business Strategies', '5ea80a7e10426255a7aa9be2']
['Business Strategies', '5e978c7ca3258347b42f2b09']
['Business Strategies', '5e8e13dca3258347b42f26f2']
['Venture Capitalist', '5eadc2f710426255a7aa9ee4']
['Venture Capitalist', '5e978c7ca3258347b42f2b09']
['Venture Capitalist', '5e8e13dca3258347b42f26f2']
['Business Enviorment', '5ea80a7e10426255a7aa9be2']
['Business Enviorment', '5eadc2f710426255a7aa9ee4']
['New Venture Planning', '5ea80a7e10426255a7aa9be2']
['New Venture Planning', '5eadc2f710426255a7aa9ee4']
['Business Organisation', '5ea80a7e10426255a7aa9be2']
['Business Organisation', '5e830dc8a3258347b42f23fb']
###Markdown
NEW
###Code
# Utility functions
# searchCode() looks up the code for a stock sector by name
def searchCode():
    isDone = False
    print('+-------- Enter the name of the sector to look up --------+')
    while not isDone:
        user_input = str(input('Search sector name: '))
        if user_input == 'quit':
            print('Operation cancelled by the user!')
            break
        try:
            print(user_input + ' code is: ' + name_dict[user_input])
            isDone = True
        except KeyError:
            print('Sector does not exist!')
    # restart the UI (to be discussed)
    user_interface(SELECTED_DATA)
# printAllBankuai() prints every sector code
def printAllBankuai():
    print('---------------- All sector codes ----------------')
    element_list = [(k, codename_dict[k]) for k in sorted(codename_dict.keys())]
    for code, name in element_list:
        print(code + ' - ' + name)
    print('-----------------------------------------------')
    # restart the UI
    user_interface(SELECTED_DATA)
def showAllSelected():
    print(SELECTED_DATA)
    if SELECTED_DATA[0]['name'] != '数据不存在':  # sentinel value meaning 'data does not exist'
        SELECTED_DATA.show()
    else:
        print('No fully analyzed data available!')
    user_interface(SELECTED_DATA)
# loads an SFrame from disk (this should perhaps return the frame instead -- to be discussed)
def parseSFrame():
    fileName = input('Enter a file name: ')
    filePath = './SelectedData/' + fileName + '/'
    SELECTED_DATA = tc.SFrame(data=filePath)
    user_interface(SELECTED_DATA)
def user_interface(xuangu):
    choice = greetings()
    if choice == 'a':
        xuangu = inputCode(xuangu)
        return xuangu
    elif choice == 's':
        searchCode()
    elif choice == 'l':
        printAllBankuai()
    elif choice == 'x':
        showAllSelected()
    elif choice == 'j':
        parseSFrame()
    else:
        user_interface(SELECTED_DATA)
    return xuangu
###Output
_____no_output_____ |
wordnet.ipynb | ###Markdown
Sample usage for wordnet WordNet Interface WordNet is just another NLTK corpus reader, and can be imported like this:
###Code
from nltk.corpus import wordnet as wn
wn.synsets('dog')
wn.synsets('dog', pos=wn.VERB)
wn.synset('dog.n.01')
print(wn.synset('dog.n.01').definition())
len(wn.synset('dog.n.01').examples())
print(wn.synset('dog.n.01').examples()[0])
wn.synset('dog.n.01').lemmas()
[str(lemma.name()) for lemma in wn.synset('dog.n.01').lemmas()]
wn.lemma('dog.n.01.dog').synset()
sorted(wn.langs())
wn.synsets(b'\xe7\x8a\xac'.decode('utf-8'), lang='jpn')
wn.synset('dog.n.01').lemma_names('ita')
wn.lemmas('cane', lang='ita')
sorted(wn.synset('dog.n.01').lemmas('dan'))
sorted(wn.synset("dog.n.01").lemmas("por"))
dog_lemma = wn.lemma(b'dog.n.01.c\xc3\xa3o'.decode('utf-8'), lang='por')
dog_lemma
dog_lemma.lang()
len(list(wn.all_lemma_names(pos='n', lang='jpn')))
dog = wn.synset('dog.n.01')
dog.hypernyms()
dog.hyponyms()
dog.member_holonyms()
dog.root_hypernyms()
wn.synset('dog.n.01').lowest_common_hypernyms(wn.synset('cat.n.01'))
good = wn.synset('good.a.01')
good.antonyms()
good.lemmas()[0].antonyms()
wn.synset_from_pos_and_offset('n', 4543158)
eat = wn.lemma('eat.v.03.eat')
eat
print(eat.key())
eat.count()
wn.lemma_from_key(eat.key())
wn.lemma_from_key(eat.key()).synset()
wn.lemma_from_key('feebleminded%5:00:00:retarded:00')
for lemma in wn.synset('eat.v.03').lemmas():
print(lemma, lemma.count())
for lemma in wn.lemmas('eat', 'v'):
print(lemma, lemma.count())
###Output
Lemma('eat.v.01.eat') 61
Lemma('eat.v.02.eat') 13
Lemma('feed.v.06.eat') 4
Lemma('eat.v.04.eat') 0
Lemma('consume.v.05.eat') 0
Lemma('corrode.v.01.eat') 0
|
Moon Data/Moon_Classification_Exercise.ipynb | ###Markdown
Moon Data Classification In this notebook, you'll be tasked with building and deploying a **custom model** in SageMaker. Specifically, you'll define and train a custom PyTorch neural network to create a binary classifier for data that is separated into two classes; the data looks like two moon shapes when it is displayed, and is often referred to as **moon data**. The notebook will be broken down into a few steps:* Generating the moon data* Loading it into an S3 bucket* Defining a PyTorch binary classifier* Completing a training script* Training and deploying the custom model* Evaluating its performance Being able to train and deploy custom models is a really useful skill to have, especially in applications that may not be easily solved by traditional algorithms like a LinearLearner.--- Load in required libraries, below.
###Code
# data
import pandas as pd
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Generating Moon Data Below, I have written code to generate some moon data, using sklearn's [make_moons](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html) and [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). I'm specifying the number of data points and a noise parameter to use for generation. Then, displaying the resulting data.
###Code
# set data params
np.random.seed(0)
num_pts = 600
noise_val = 0.25
# generate data
# X = 2D points, Y = class labels (0 or 1)
X, Y = make_moons(num_pts, noise=noise_val)
# Split into test and training data
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,
test_size=0.25, random_state=1)
# plot
# points are colored by class, Y_train
# 0 labels = purple, 1 = yellow
plt.figure(figsize=(8,5))
plt.scatter(X_train[:,0], X_train[:,1], c=Y_train)
plt.title('Moon Data')
plt.show()
###Output
_____no_output_____
###Markdown
SageMaker Resources The below cell stores the SageMaker session and role (for creating estimators and models), and creates a default S3 bucket. After creating this bucket, you can upload any locally stored data to S3.
###Code
# sagemaker
import boto3
import sagemaker
from sagemaker import get_execution_role
# SageMaker session and role
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
# default S3 bucket
bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
EXERCISE: Create csv files Define a function that takes in x (features) and y (labels) and saves them to one `.csv` file at the path `data_dir/filename`. SageMaker expects `.csv` files to be in a certain format, according to the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html):> Amazon SageMaker requires that a CSV file doesn't have a header record and that the target variable is in the first column. It may be useful to use pandas to merge your features and labels into one DataFrame and then convert that into a `.csv` file. When you create a `.csv` file, make sure to set `header=False` and `index=False` so you don't include anything extraneous, like column names, in the `.csv` file.
###Code
import os
def make_csv(x, y, filename, data_dir):
'''Merges features and labels and converts them into one csv file with labels in the first column.
:param x: Data features
:param y: Data labels
:param file_name: Name of csv file, ex. 'train.csv'
:param data_dir: The directory where files will be saved
'''
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# your code here
pd.concat([pd.DataFrame(y), pd.DataFrame(x)], axis = 1).to_csv(os.path.join(data_dir, filename),
header = False, index = False)
# nothing is returned, but a print statement indicates that the function has run
print('Path created: '+str(data_dir)+'/'+str(filename))
###Output
_____no_output_____
###Markdown
The next cell runs the above function to create a `train.csv` file in a specified directory.
###Code
data_dir = 'data_moon' # the folder we will use for storing data
name = 'train.csv'
# create 'train.csv'
make_csv(X_train, Y_train, name, data_dir)
###Output
Path created: data_moon/train.csv
###Markdown
Upload Data to S3 Upload the locally-stored `train.csv` file to S3 by using `sagemaker_session.upload_data`. This function needs to know: where the data is saved locally, and where to upload in S3 (a bucket and prefix).
###Code
# specify where to upload in S3
prefix = 'moon-data'
# upload to S3
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
print(input_data)
###Output
s3://sagemaker-us-east-1-441543649966/moon-data
###Markdown
Check that you've uploaded the data, by printing the contents of the default bucket.
###Code
# iterate through S3 objects and print contents
for obj in boto3.resource('s3').Bucket(bucket).objects.all():
print(obj.key)
###Output
_____no_output_____
###Markdown
--- Modeling Now that you've uploaded your training data, it's time to define and train a model! In this notebook, you'll define and train a **custom PyTorch model**; a neural network that performs binary classification. EXERCISE: Define a model in `model.py` To implement a custom classifier, the first thing you'll do is define a neural network. You've been given some starting code in the directory `source`, where you can find the file, `model.py`. You'll need to complete the class `SimpleNet`; specifying the layers of the neural network and its feedforward behavior. It may be helpful to review the [code for a 3-layer MLP](https://github.com/udacity/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb). This model should be designed to: * Accept a number of `input_dim` features* Create some linear, hidden layers of a desired size* Return **a single output value** that indicates the class score The returned output value should be a [sigmoid-activated](https://pytorch.org/docs/stable/nn.html#sigmoid) class score; a value between 0-1 that can be rounded to get a predicted class label. Below, you can use !pygmentize to display the code in the `model.py` file. Read through the code; all of your tasks are marked with TODO comments. You should navigate to the file, and complete the tasks to define a `SimpleNet`.
###Code
!pygmentize source/model.py
###Output
[34mimport[39;49;00m [04m[36mtorch[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mnn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mnn[39;49;00m[04m[36m.[39;49;00m[04m[36mfunctional[39;49;00m [34mas[39;49;00m [04m[36mF[39;49;00m
[37m## TODO: Complete this classifier[39;49;00m
[34mclass[39;49;00m [04m[32mSimpleNet[39;49;00m(nn.Module):
[37m## TODO: Define the init function[39;49;00m
[34mdef[39;49;00m [32m__init__[39;49;00m([36mself[39;49;00m, input_dim, hidden_dim, output_dim):
[33m'''Defines layers of a neural network.[39;49;00m
[33m :param input_dim: Number of input features[39;49;00m
[33m :param hidden_dim: Size of hidden layer(s)[39;49;00m
[33m :param output_dim: Number of outputs[39;49;00m
[33m '''[39;49;00m
[36msuper[39;49;00m(SimpleNet, [36mself[39;49;00m).[32m__init__[39;49;00m()
[37m# define all layers, here[39;49;00m
[37m# fully connected layers[39;49;00m
[36mself[39;49;00m.fc_in = nn.Linear(input_dim, hidden_dim)
[36mself[39;49;00m.fc_hidden = nn.Linear(hidden_dim, hidden_dim)
[36mself[39;49;00m.fc_out = nn.Linear(hidden_dim, output_dim)
[37m# dropout layer[39;49;00m
[36mself[39;49;00m.drop = nn.Dropout([34m0.5[39;49;00m)
[37m# Sigmoid layer for classification[39;49;00m
[36mself[39;49;00m.sig = nn.Sigmoid()
[37m## TODO: Define the feedforward behavior of the network[39;49;00m
[34mdef[39;49;00m [32mforward[39;49;00m([36mself[39;49;00m, x):
[33m'''Feedforward behavior of the net.[39;49;00m
[33m :param x: A batch of input features[39;49;00m
[33m :return: A single, sigmoid activated value[39;49;00m
[33m '''[39;49;00m
[37m# your code, here[39;49;00m
[37m# 10 hidden layers with a dropout layer between each[39;49;00m
x = F.relu([36mself[39;49;00m.fc_in(x))
x = [36mself[39;49;00m.drop(x)
[34mfor[39;49;00m i [35min[39;49;00m [36mrange[39;49;00m([34m9[39;49;00m):
x = F.relu([36mself[39;49;00m.fc_hidden(x))
x = [36mself[39;49;00m.drop(x)
[37m# last hidden layer with sigmoid activation for output[39;49;00m
x = [36mself[39;49;00m.fc_out(x)
x = [36mself[39;49;00m.sig(x)
[34mreturn[39;49;00m x
###Markdown
Training Script To implement a custom classifier, you'll also need to complete a `train.py` script. You can find this in the `source` directory. A typical training script:* Loads training data from a specified directory* Parses any training & model hyperparameters (ex. nodes in a neural network, training epochs, etc.)* Instantiates a model of your design, with any specified hyperparams* Trains that model* Finally, saves the model so that it can be hosted/deployed later EXERCISE: Complete the `train.py` script Much of the training script code is provided for you. Almost all of your work will be done in the if __name__ == '__main__': section. To complete the `train.py` file, you will:* Define any additional model training hyperparameters using `parser.add_argument`* Define a model in the if __name__ == '__main__': section* Train the model in that same section Below, you can use !pygmentize to display an existing train.py file. Read through the code; all of your tasks are marked with TODO comments.
###Code
!pygmentize source/train.py
###Output
[34mfrom[39;49;00m [04m[36m__future__[39;49;00m [34mimport[39;49;00m print_function [37m# future proof[39;49;00m
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36msys[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mimport[39;49;00m [04m[36mpandas[39;49;00m [34mas[39;49;00m [04m[36mpd[39;49;00m
[37m# pytorch[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mnn[39;49;00m [34mas[39;49;00m [04m[36mnn[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36moptim[39;49;00m [34mas[39;49;00m [04m[36moptim[39;49;00m
[34mimport[39;49;00m [04m[36mtorch[39;49;00m[04m[36m.[39;49;00m[04m[36mutils[39;49;00m[04m[36m.[39;49;00m[04m[36mdata[39;49;00m
[37m# import model[39;49;00m
[34mfrom[39;49;00m [04m[36mmodel[39;49;00m [34mimport[39;49;00m SimpleNet
[34mdef[39;49;00m [32mmodel_fn[39;49;00m(model_dir):
[36mprint[39;49;00m([33m"[39;49;00m[33mLoading model.[39;49;00m[33m"[39;49;00m)
[37m# First, load the parameters used to create the model.[39;49;00m
model_info = {}
model_info_path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel_info.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_info_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model_info = torch.load(f)
[36mprint[39;49;00m([33m"[39;49;00m[33mmodel_info: [39;49;00m[33m{}[39;49;00m[33m"[39;49;00m.format(model_info))
[37m# Determine the device and construct the model.[39;49;00m
device = torch.device([33m"[39;49;00m[33mcuda[39;49;00m[33m"[39;49;00m [34mif[39;49;00m torch.cuda.is_available() [34melse[39;49;00m [33m"[39;49;00m[33mcpu[39;49;00m[33m"[39;49;00m)
model = SimpleNet(model_info[[33m'[39;49;00m[33minput_dim[39;49;00m[33m'[39;49;00m],
model_info[[33m'[39;49;00m[33mhidden_dim[39;49;00m[33m'[39;49;00m],
model_info[[33m'[39;49;00m[33moutput_dim[39;49;00m[33m'[39;49;00m])
[37m# Load the stored model parameters.[39;49;00m
model_path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_path, [33m'[39;49;00m[33mrb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model.load_state_dict(torch.load(f))
[34mreturn[39;49;00m model.to(device)
[37m# Load the training data from a csv file[39;49;00m
[34mdef[39;49;00m [32m_get_train_loader[39;49;00m(batch_size, data_dir):
[36mprint[39;49;00m([33m"[39;49;00m[33mGet data loader.[39;49;00m[33m"[39;49;00m)
[37m# read in csv file[39;49;00m
train_data = pd.read_csv(os.path.join(data_dir, [33m"[39;49;00m[33mtrain.csv[39;49;00m[33m"[39;49;00m), header=[34mNone[39;49;00m, names=[34mNone[39;49;00m)
[37m# labels are first column[39;49;00m
train_y = torch.from_numpy(train_data[[[34m0[39;49;00m]].values).float().squeeze()
[37m# features are the rest[39;49;00m
train_x = torch.from_numpy(train_data.drop([[34m0[39;49;00m], axis=[34m1[39;49;00m).values).float()
[37m# create dataset[39;49;00m
train_ds = torch.utils.data.TensorDataset(train_x, train_y)
[34mreturn[39;49;00m torch.utils.data.DataLoader(train_ds, batch_size=batch_size)
[37m# Provided train function[39;49;00m
[34mdef[39;49;00m [32mtrain[39;49;00m(model, train_loader, epochs, optimizer, criterion, device):
[33m"""[39;49;00m
[33m This is the training method that is called by the PyTorch training script. The parameters[39;49;00m
[33m passed are as follows:[39;49;00m
[33m model - The PyTorch model that we wish to train.[39;49;00m
[33m train_loader - The PyTorch DataLoader that should be used during training.[39;49;00m
[33m epochs - The total number of epochs to train for.[39;49;00m
[33m optimizer - The optimizer to use during training.[39;49;00m
[33m criterion - The loss function used for training. [39;49;00m
[33m device - Where the model and data should be loaded (gpu or cpu).[39;49;00m
[33m """[39;49;00m
[34mfor[39;49;00m epoch [35min[39;49;00m [36mrange[39;49;00m([34m1[39;49;00m, epochs + [34m1[39;49;00m):
model.train()
total_loss = [34m0[39;49;00m
[34mfor[39;49;00m batch_idx, (data, target) [35min[39;49;00m [36menumerate[39;49;00m(train_loader, [34m1[39;49;00m):
[37m# prep data[39;49;00m
data, target = data.to(device), target.to(device)
optimizer.zero_grad() [37m# zero accumulated gradients[39;49;00m
[37m# get output of SimpleNet[39;49;00m
output = model(data)
[37m# calculate loss and perform backprop[39;49;00m
loss = criterion(output, target)
loss.backward()
optimizer.step()
total_loss += loss.item()
[37m# print loss stats[39;49;00m
[36mprint[39;49;00m([33m"[39;49;00m[33mEpoch: [39;49;00m[33m{}[39;49;00m[33m, Loss: [39;49;00m[33m{}[39;49;00m[33m"[39;49;00m.format(epoch, total_loss / [36mlen[39;49;00m(train_loader)))
[37m# save after all epochs[39;49;00m
save_model(model, args.model_dir)
[37m# Provided model saving functions[39;49;00m
[34mdef[39;49;00m [32msave_model[39;49;00m(model, model_dir):
[36mprint[39;49;00m([33m"[39;49;00m[33mSaving the model.[39;49;00m[33m"[39;49;00m)
path = os.path.join(model_dir, [33m'[39;49;00m[33mmodel.pth[39;49;00m[33m'[39;49;00m)
[37m# save state dictionary[39;49;00m
torch.save(model.cpu().state_dict(), path)
[34mdef[39;49;00m [32msave_model_params[39;49;00m(model, model_dir):
model_info_path = os.path.join(args.model_dir, [33m'[39;49;00m[33mmodel_info.pth[39;49;00m[33m'[39;49;00m)
[34mwith[39;49;00m [36mopen[39;49;00m(model_info_path, [33m'[39;49;00m[33mwb[39;49;00m[33m'[39;49;00m) [34mas[39;49;00m f:
model_info = {
[33m'[39;49;00m[33minput_dim[39;49;00m[33m'[39;49;00m: args.input_dim,
[33m'[39;49;00m[33mhidden_dim[39;49;00m[33m'[39;49;00m: args.hidden_dim,
[33m'[39;49;00m[33moutput_dim[39;49;00m[33m'[39;49;00m: args.output_dim
}
torch.save(model_info, f)
[37m## TODO: Complete the main code[39;49;00m
[34mif[39;49;00m [31m__name__[39;49;00m == [33m'[39;49;00m[33m__main__[39;49;00m[33m'[39;49;00m:
[37m# All of the model parameters and training parameters are sent as arguments[39;49;00m
[37m# when this script is executed, during a training job[39;49;00m
[37m# Here we set up an argument parser to easily access the parameters[39;49;00m
parser = argparse.ArgumentParser()
[37m# SageMaker parameters, like the directories for training data and saving models; set automatically[39;49;00m
[37m# Do not need to change[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--hosts[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mlist[39;49;00m, default=json.loads(os.environ[[33m'[39;49;00m[33mSM_HOSTS[39;49;00m[33m'[39;49;00m]))
parser.add_argument([33m'[39;49;00m[33m--current-host[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ[[33m'[39;49;00m[33mSM_CURRENT_HOST[39;49;00m[33m'[39;49;00m])
parser.add_argument([33m'[39;49;00m[33m--model-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ[[33m'[39;49;00m[33mSM_MODEL_DIR[39;49;00m[33m'[39;49;00m])
parser.add_argument([33m'[39;49;00m[33m--data-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ[[33m'[39;49;00m[33mSM_CHANNEL_TRAIN[39;49;00m[33m'[39;49;00m])
[37m# Training Parameters, given[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--batch-size[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m64[39;49;00m, metavar=[33m'[39;49;00m[33mN[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33minput batch size for training (default: 64)[39;49;00m[33m'[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--epochs[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m10[39;49;00m, metavar=[33m'[39;49;00m[33mN[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33mnumber of epochs to train (default: 10)[39;49;00m[33m'[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--lr[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mfloat[39;49;00m, default=[34m0.001[39;49;00m, metavar=[33m'[39;49;00m[33mLR[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33mlearning rate (default: 0.001)[39;49;00m[33m'[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--seed[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m1[39;49;00m, metavar=[33m'[39;49;00m[33mS[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33mrandom seed (default: 1)[39;49;00m[33m'[39;49;00m)
[37m## TODO: Add args for the three model parameters: input_dim, hidden_dim, output_dim[39;49;00m
[37m# Model parameters[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--input_dim[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m2[39;49;00m, metavar=[33m'[39;49;00m[33mIN[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33mdimention of the input layer (default: 2)[39;49;00m[33m'[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--hidden_dim[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m10[39;49;00m, metavar=[33m'[39;49;00m[33mH[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33mdimension of the hidden layers (default: 10)[39;49;00m[33m'[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--output_dim[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m1[39;49;00m, metavar=[33m'[39;49;00m[33mOUT[39;49;00m[33m'[39;49;00m,
help=[33m'[39;49;00m[33mdimension of the output layer (default: 1)[39;49;00m[33m'[39;49;00m)
args = parser.parse_args()
device = torch.device([33m"[39;49;00m[33mcuda[39;49;00m[33m"[39;49;00m [34mif[39;49;00m torch.cuda.is_available() [34melse[39;49;00m [33m"[39;49;00m[33mcpu[39;49;00m[33m"[39;49;00m)
[37m# set the seed for generating random numbers[39;49;00m
torch.manual_seed(args.seed)
[34mif[39;49;00m torch.cuda.is_available():
torch.cuda.manual_seed(args.seed)
[37m# get train loader[39;49;00m
train_loader = _get_train_loader(args.batch_size, args.data_dir) [37m# data_dir from above..[39;49;00m
[37m## TODO: Build the model by passing in the input params[39;49;00m
[37m# To get params from the parser, call args.argument_name, ex. args.epochs or ards.hidden_dim[39;49;00m
[37m# Don't forget to move your model .to(device) to move to GPU , if appropriate[39;49;00m
model = SimpleNet(args.input_dim, args.hidden_dim, args.output_dim).to(device)
[37m# Given: save the parameters used to construct the model[39;49;00m
save_model_params(model, args.model_dir)
[37m## TODO: Define an optimizer and loss function for training[39;49;00m
optimizer = optim.Adam(model.parameters(), lr = args.lr)
criterion = nn.BCELoss()
[37m# Trains the model (given line of code, which calls the above training function)[39;49;00m
[37m# This function *also* saves the model state dictionary[39;49;00m
train(model, train_loader, args.epochs, optimizer, criterion, device)
###Markdown
EXERCISE: Create a PyTorch Estimator You've had some practice instantiating built-in models in SageMaker. All estimators require some constructor arguments to be passed in. When a custom model is constructed in SageMaker, an **entry point** must be specified. The entry_point is the training script that will be executed when the model is trained; the `train.py` script you specified above! See if you can complete this task, instantiating a PyTorch estimator, using only the [PyTorch estimator documentation](https://sagemaker.readthedocs.io/en/stable/sagemaker.pytorch.html) as a resource. It is suggested that you use the **latest version** of PyTorch as the optional `framework_version` parameter. Instance Types It is suggested that you use instances that are available in the free tier of usage: `'ml.c4.xlarge'` for training and `'ml.t2.medium'` for deployment.
###Code
# import a PyTorch wrapper
from sagemaker.pytorch import PyTorch
# specify an output path
output_path = f's3://{bucket}/{prefix}'
# instantiate a pytorch estimator
estimator = PyTorch(entry_point = 'train.py',
source_dir = 'source',
instance_count = 1,
instance_type = 'ml.c4.xlarge',
framework_version = '1.5.0',
py_version = 'py3',
role = role,
output_path = output_path,
sagemaker_session = sagemaker_session,
hyperparameters = {
'input_dim':2,
'hidden_dim':60,
'output_dim':1,
'epochs':120
})
###Output
_____no_output_____
###Markdown
Train the Estimator After instantiating your estimator, train it with a call to `.fit()`. The `train.py` file explicitly loads in `.csv` data, so you do not need to convert the input data to any other format.
###Code
%%time
# train the estimator on S3 training data
estimator.fit({'train': input_data})
###Output
2021-01-26 16:44:49 Starting - Starting the training job...
2021-01-26 16:45:15 Starting - Launching requested ML instancesProfilerReport-1611679488: InProgress
.........
2021-01-26 16:46:36 Starting - Preparing the instances for training......
2021-01-26 16:47:43 Downloading - Downloading input data...
2021-01-26 16:48:18 Training - Downloading the training image..[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2021-01-26 16:48:30,719 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training[0m
[34m2021-01-26 16:48:30,722 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-01-26 16:48:30,733 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.[0m
[34m2021-01-26 16:48:33,759 sagemaker_pytorch_container.training INFO Invoking user training script.[0m
[34m2021-01-26 16:48:34,129 sagemaker-containers INFO Module default_user_module_name does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2021-01-26 16:48:34,129 sagemaker-containers INFO Generating setup.cfg[0m
[34m2021-01-26 16:48:34,129 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2021-01-26 16:48:34,130 sagemaker-containers INFO Installing module with the following command:[0m
[34m/opt/conda/bin/python -m pip install . [0m
[34mProcessing /tmp/tmpgn0zntx9/module_dir[0m
[34mBuilding wheels for collected packages: default-user-module-name
Building wheel for default-user-module-name (setup.py): started
Building wheel for default-user-module-name (setup.py): finished with status 'done'
Created wheel for default-user-module-name: filename=default_user_module_name-1.0.0-py2.py3-none-any.whl size=14392 sha256=2405983a036a282f0e65a0115903bb2638f63c7b1e1a151b77c350b4a3a832fe
Stored in directory: /tmp/pip-ephem-wheel-cache-6pxsf5pl/wheels/a7/81/b0/38f5507e99f8d74e89f0cb47bd7b8ec5d44a81abaa31568828[0m
[34mSuccessfully built default-user-module-name[0m
[34mInstalling collected packages: default-user-module-name[0m
[34mSuccessfully installed default-user-module-name-1.0.0[0m
[34mWARNING: You are using pip version 20.1; however, version 21.0 is available.[0m
[34mYou should consider upgrading via the '/opt/conda/bin/python -m pip install --upgrade pip' command.[0m
[34m2021-01-26 16:48:36,473 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-01-26 16:48:36,487 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-01-26 16:48:36,501 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2021-01-26 16:48:36,513 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {},
"channel_input_dirs": {
"train": "/opt/ml/input/data/train"
},
"current_host": "algo-1",
"framework_module": "sagemaker_pytorch_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"input_dim": 2,
"hidden_dim": 60,
"epochs": 120,
"output_dim": 1
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"train": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "pytorch-training-2021-01-26-16-44-48-828",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-441543649966/pytorch-training-2021-01-26-16-44-48-828/source/sourcedir.tar.gz",
"module_name": "train",
"network_interface_name": "eth0",
"num_cpus": 4,
"num_gpus": 0,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "train.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={"epochs":120,"hidden_dim":60,"input_dim":2,"output_dim":1}[0m
[34mSM_USER_ENTRY_POINT=train.py[0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=["train"][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=train[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=4[0m
[34mSM_NUM_GPUS=0[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-441543649966/pytorch-training-2021-01-26-16-44-48-828/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{"train":"/opt/ml/input/data/train"},"current_host":"algo-1","framework_module":"sagemaker_pytorch_container.training:main","hosts":["algo-1"],"hyperparameters":{"epochs":120,"hidden_dim":60,"input_dim":2,"output_dim":1},"input_config_dir":"/opt/ml/input/config","input_data_config":{"train":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"pytorch-training-2021-01-26-16-44-48-828","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-441543649966/pytorch-training-2021-01-26-16-44-48-828/source/sourcedir.tar.gz","module_name":"train","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train.py"}[0m
[34mSM_USER_ARGS=["--epochs","120","--hidden_dim","60","--input_dim","2","--output_dim","1"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_TRAIN=/opt/ml/input/data/train[0m
[34mSM_HP_INPUT_DIM=2[0m
[34mSM_HP_HIDDEN_DIM=60[0m
[34mSM_HP_EPOCHS=120[0m
[34mSM_HP_OUTPUT_DIM=1[0m
[34mPYTHONPATH=/opt/ml/code:/opt/conda/bin:/opt/conda/lib/python36.zip:/opt/conda/lib/python3.6:/opt/conda/lib/python3.6/lib-dynload:/opt/conda/lib/python3.6/site-packages
[0m
[34mInvoking script with the following command:
[0m
[34m/opt/conda/bin/python train.py --epochs 120 --hidden_dim 60 --input_dim 2 --output_dim 1
[0m
[34mGet data loader.[0m
2021-01-26 16:48:45 Uploading - Uploading generated training model[34m[2021-01-26 16:48:39.823 algo-1:44 INFO json_config.py:90] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[2021-01-26 16:48:39.824 algo-1:44 INFO hook.py:183] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[2021-01-26 16:48:39.824 algo-1:44 INFO hook.py:228] Saving to /opt/ml/output/tensors[0m
[34m[2021-01-26 16:48:39.824 algo-1:44 INFO hook.py:364] Monitoring the collections: losses[0m
[34m[2021-01-26 16:48:39.825 algo-1:44 INFO hook.py:422] Hook is writing from the hook with pid: 44
[0m
[34mEpoch: 1, Loss: 0.6908036097884178[0m
[34mEpoch: 2, Loss: 0.6920408830046654[0m
[34mEpoch: 3, Loss: 0.6916394382715225[0m
[34mEpoch: 4, Loss: 0.6889793649315834[0m
[34mEpoch: 5, Loss: 0.6905276104807854[0m
[34mEpoch: 6, Loss: 0.6898483708500862[0m
[34mEpoch: 7, Loss: 0.6898057237267494[0m
[34mEpoch: 8, Loss: 0.6885049715638161[0m
[34mEpoch: 9, Loss: 0.6897438317537308[0m
[34mEpoch: 10, Loss: 0.6871912851929665[0m
[34mEpoch: 11, Loss: 0.687233991920948[0m
[34mEpoch: 12, Loss: 0.6891267746686935[0m
[34mEpoch: 13, Loss: 0.6894926726818085[0m
[34mEpoch: 14, Loss: 0.6889978349208832[0m
[34mEpoch: 15, Loss: 0.6883935779333115[0m
[34mEpoch: 16, Loss: 0.6878218278288841[0m
[34mEpoch: 17, Loss: 0.689660519361496[0m
[34mEpoch: 18, Loss: 0.6870928555727005[0m
[34mEpoch: 19, Loss: 0.6890536695718765[0m
[34mEpoch: 20, Loss: 0.6882910281419754[0m
[34mEpoch: 21, Loss: 0.6862850710749626[0m
[34mEpoch: 22, Loss: 0.6871005520224571[0m
[34mEpoch: 23, Loss: 0.6741482466459274[0m
[34mEpoch: 24, Loss: 0.6890128701925278[0m
[34mEpoch: 25, Loss: 0.6810644268989563[0m
[34mEpoch: 26, Loss: 0.6463201493024826[0m
[34mEpoch: 27, Loss: 0.6406928300857544[0m
[34mEpoch: 28, Loss: 0.6542286574840546[0m
[34mEpoch: 29, Loss: 0.6180807203054428[0m
[34mEpoch: 30, Loss: 0.5901653580367565[0m
[34mEpoch: 31, Loss: 0.6235450878739357[0m
[34mEpoch: 32, Loss: 0.6167617067694664[0m
[34mEpoch: 33, Loss: 0.5810073465108871[0m
[34mEpoch: 34, Loss: 0.608798049390316[0m
[34mEpoch: 35, Loss: 0.6098298355937004[0m
[34mEpoch: 36, Loss: 0.5668461881577969[0m
[34mEpoch: 37, Loss: 0.5835252404212952[0m
[34mEpoch: 38, Loss: 0.5514932535588741[0m
[34mEpoch: 39, Loss: 0.5606307163834572[0m
[34mEpoch: 40, Loss: 0.5231835693120956[0m
[34mEpoch: 41, Loss: 0.5834812261164188[0m
[34mEpoch: 42, Loss: 0.5068818517029285[0m
[34mEpoch: 43, Loss: 0.7984053269028664[0m
[34mEpoch: 44, Loss: 0.48848166689276695[0m
[34mEpoch: 45, Loss: 0.5412987396121025[0m
[34mEpoch: 46, Loss: 0.5111497864127159[0m
[34mEpoch: 47, Loss: 0.4768614359200001[0m
[34mEpoch: 48, Loss: 0.5140528008341789[0m
[34mEpoch: 49, Loss: 0.504484087228775[0m
[34mEpoch: 50, Loss: 0.6039571538567543[0m
[34mEpoch: 51, Loss: 0.49702345952391624[0m
[34mEpoch: 52, Loss: 0.4821183569729328[0m
[34mEpoch: 53, Loss: 0.5134796462953091[0m
[34mEpoch: 54, Loss: 0.46601760387420654[0m
[34mEpoch: 55, Loss: 0.505382813513279[0m
[34mEpoch: 56, Loss: 0.4410233795642853[0m
[34mEpoch: 57, Loss: 0.3817650377750397[0m
[34mEpoch: 58, Loss: 0.7113528028130531[0m
[34mEpoch: 59, Loss: 0.4524150863289833[0m
[34mEpoch: 60, Loss: 0.4655093811452389[0m
[34mEpoch: 61, Loss: 0.5338554568588734[0m
[34mEpoch: 62, Loss: 0.5649877600371838[0m
[34mEpoch: 63, Loss: 0.4654165916144848[0m
[34mEpoch: 64, Loss: 0.4766323193907738[0m
[34mEpoch: 65, Loss: 0.4723433740437031[0m
[34mEpoch: 66, Loss: 0.4797331467270851[0m
[34mEpoch: 67, Loss: 0.45389315858483315[0m
[34mEpoch: 68, Loss: 0.41504909470677376[0m
[34mEpoch: 69, Loss: 0.4974297136068344[0m
[34mEpoch: 70, Loss: 0.454141091555357[0m
[34mEpoch: 71, Loss: 0.42979762703180313[0m
[34mEpoch: 72, Loss: 0.43634628877043724[0m
[34mEpoch: 73, Loss: 0.4146912060678005[0m
[34mEpoch: 74, Loss: 0.40102680772542953[0m
[34mEpoch: 75, Loss: 0.5323594473302364[0m
[34mEpoch: 76, Loss: 0.3965425118803978[0m
[34mEpoch: 77, Loss: 0.4876905754208565[0m
[34mEpoch: 78, Loss: 0.4221675172448158[0m
[34mEpoch: 79, Loss: 0.46357808634638786[0m
[34mEpoch: 80, Loss: 0.39132950082421303[0m
[34mEpoch: 81, Loss: 0.42814962938427925[0m
[34mEpoch: 82, Loss: 0.36541498079895973[0m
[34mEpoch: 83, Loss: 0.3650229908525944[0m
[34mEpoch: 84, Loss: 0.3982105515897274[0m
[34mEpoch: 85, Loss: 0.3611805643886328[0m
[34mEpoch: 86, Loss: 0.6417031362652779[0m
[34mEpoch: 87, Loss: 0.45397766679525375[0m
[34mEpoch: 88, Loss: 0.4329790249466896[0m
[34mEpoch: 89, Loss: 0.4360342025756836[0m
[34mEpoch: 90, Loss: 0.45970072597265244[0m
[34mEpoch: 91, Loss: 0.472491517663002[0m
[34mEpoch: 92, Loss: 0.44455286487936974[0m
[34mEpoch: 93, Loss: 0.4621676504611969[0m
[34mEpoch: 94, Loss: 0.4056120291352272[0m
[34mEpoch: 95, Loss: 0.40046630054712296[0m
[34mEpoch: 96, Loss: 0.42284689098596573[0m
[34mEpoch: 97, Loss: 0.5503215827047825[0m
[34mEpoch: 98, Loss: 0.39570067822933197[0m
[34mEpoch: 99, Loss: 0.4493889883160591[0m
[34mEpoch: 100, Loss: 0.4556043781340122[0m
[34mEpoch: 101, Loss: 0.409470584243536[0m
[34mEpoch: 102, Loss: 0.4194321744143963[0m
[34mEpoch: 103, Loss: 0.4418772980570793[0m
[34mEpoch: 104, Loss: 0.3437572540715337[0m
[34mEpoch: 105, Loss: 0.3976924791932106[0m
[34mEpoch: 106, Loss: 0.41119642555713654[0m
[34mEpoch: 107, Loss: 0.3896511495113373[0m
[34mEpoch: 108, Loss: 0.3996008113026619[0m
[34mEpoch: 109, Loss: 0.41742073372006416[0m
[34mEpoch: 110, Loss: 0.3967522978782654[0m
[34mEpoch: 111, Loss: 0.4652267098426819[0m
[34mEpoch: 112, Loss: 0.3771080747246742[0m
[34mEpoch: 113, Loss: 0.3720633387565613[0m
[34mEpoch: 114, Loss: 0.3181770369410515[0m
[34mEpoch: 115, Loss: 0.41743091493844986[0m
[34mEpoch: 116, Loss: 0.38632725179195404[0m
[34mEpoch: 117, Loss: 0.39293285086750984[0m
[34mEpoch: 118, Loss: 0.32474648021161556[0m
[34mEpoch: 119, Loss: 0.3757524713873863[0m
[34mEpoch: 120, Loss: 0.4377058632671833[0m
[34mSaving the model.[0m
[34m[2021-01-26 16:48:44.380 algo-1:44 INFO utils.py:25] The end of training job file will not be written for jobs running under SageMaker.[0m
[34m2021-01-26 16:48:44,572 sagemaker-containers INFO Reporting training SUCCESS[0m
###Markdown
Create a Trained Model PyTorch models do not automatically come with `.predict()` functions attached (as many Scikit-learn models do, for example) and you may have noticed that you've been given a `predict.py` file. This file is responsible for loading a trained model and applying it to passed-in numpy data. When you created a PyTorch estimator, you specified where the training script, `train.py`, was located. > How can we tell a PyTorch model where the `predict.py` file is? Before you can deploy this custom PyTorch model, you have to take one more step: creating a `PyTorchModel`. In earlier exercises you could see that a call to `.deploy()` created both a **model** and an **endpoint**, but for PyTorch models, these steps have to be separate. EXERCISE: Instantiate a `PyTorchModel` You can create a `PyTorchModel` (different from a PyTorch estimator) from your trained estimator's attributes. This model is responsible for knowing how to execute a specific `predict.py` script. And this model is what you'll deploy to create an endpoint. Model Parameters To instantiate a `PyTorchModel` ([documentation, here](https://sagemaker.readthedocs.io/en/stable/sagemaker.pytorch.html#sagemaker.pytorch.model.PyTorchModel)) you pass in the same arguments as your PyTorch estimator, with a few additions/modifications:* **model_data**: The trained `model.tar.gz` file created by your estimator, which can be accessed as `estimator.model_data`.* **entry_point**: This time, this is the path to the Python script SageMaker runs for **prediction** rather than training, `predict.py`.
###Code
%%time
# importing PyTorchModel
from sagemaker.pytorch import PyTorchModel
# Create a model from the trained estimator data
# And point to the prediction script
model = PyTorchModel(model_data = estimator.model_data,
entry_point = 'predict.py',
source_dir = 'source',
role = role,
framework_version = '1.5.0',
py_version = 'py3')
###Output
CPU times: user 10.7 ms, sys: 3.89 ms, total: 14.5 ms
Wall time: 91.1 ms
###Markdown
EXERCISE: Deploy the trained model Deploy your model to create a predictor. We'll use this to make predictions on our test data and evaluate the model.
###Code
%%time
# deploy and create a predictor
predictor = model.deploy(initial_instance_count = 1, instance_type = 'ml.t2.medium')
###Output
-----------------------!CPU times: user 563 ms, sys: 41.1 ms, total: 605 ms
Wall time: 11min 35s
###Markdown
--- Evaluating Your Model Once your model is deployed, you can see how it performs when applied to the test data. The provided function below takes in a deployed predictor, some test features and labels, and returns a dictionary of metrics, calculating false negatives and positives as well as recall, precision, and accuracy.
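For reference, these are the standard confusion-matrix definitions the function applies: recall = TP / (TP + FN), precision = TP / (TP + FP), and accuracy = (TP + TN) / (TP + FP + TN + FN), where TP, FP, TN, and FN are the true/false positive/negative counts computed below.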
###Code
# code to evaluate the endpoint on test data
# returns a variety of model metrics
def evaluate(predictor, test_features, test_labels, verbose=True):
"""
Evaluate a model on a test set given the prediction endpoint.
Return binary classification metrics.
:param predictor: A prediction endpoint
:param test_features: Test features
:param test_labels: Class labels for test data
:param verbose: If True, prints a table of all performance metrics
:return: A dictionary of performance metrics.
"""
# rounding and squeezing array
test_preds = np.squeeze(np.round(predictor.predict(test_features)))
# calculate true positives, false positives, true negatives, false negatives
tp = np.logical_and(test_labels, test_preds).sum()
fp = np.logical_and(1-test_labels, test_preds).sum()
tn = np.logical_and(1-test_labels, 1-test_preds).sum()
fn = np.logical_and(test_labels, 1-test_preds).sum()
# calculate binary classification metrics
recall = tp / (tp + fn)
precision = tp / (tp + fp)
accuracy = (tp + tn) / (tp + fp + tn + fn)
# print metrics
if verbose:
print(pd.crosstab(test_labels, test_preds, rownames=['actuals'], colnames=['predictions']))
print("\n{:<11} {:.3f}".format('Recall:', recall))
print("{:<11} {:.3f}".format('Precision:', precision))
print("{:<11} {:.3f}".format('Accuracy:', accuracy))
print()
return {'TP': tp, 'FP': fp, 'FN': fn, 'TN': tn,
'Precision': precision, 'Recall': recall, 'Accuracy': accuracy}
###Output
_____no_output_____
###Markdown
Test Results The cell below runs the `evaluate` function. The code assumes that you have a defined `predictor` and `X_test` and `Y_test` from previously-run cells.
###Code
# get metrics for custom predictor
metrics = evaluate(predictor, X_test, Y_test, True)
###Output
predictions 0.0 1.0
actuals
0 64 7
1 10 69
Recall: 0.873
Precision: 0.908
Accuracy: 0.887
###Markdown
Delete the Endpoint Finally, I've added a convenience function to delete prediction endpoints after we're done with them. If you're done evaluating the model, you should delete your model endpoint!
###Code
# Accepts a predictor endpoint as input
# And deletes the endpoint by name
def delete_endpoint(predictor):
try:
boto3.client('sagemaker').delete_endpoint(EndpointName=predictor.endpoint)
print('Deleted {}'.format(predictor.endpoint))
except:
print('Already deleted: {}'.format(predictor.endpoint))
# delete the predictor endpoint
delete_endpoint(predictor)
###Output
The endpoint attribute has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
The endpoint attribute has been renamed in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
|
jwst_validation_notebooks/flat_field/jwst_flat_field_miri_test/jwst_flat_field_miri_test.ipynb | ###Markdown
Testing flat_field step with MIRI simulated data Summary This notebook processes an image through calwebb_image2 (calwebb_detector1 is optional) and examines the output of the flat_field step. The steps are as follows: 1) Set up data path and directory and image file name. 2) Modify header information of input simulations (if needed). 3) Run input data through calwebb_detector1. 4) Run output of calwebb_detector1 through the flat_field step in calwebb_image2. 5) Get flat field reference file. 6) Compare the flat field reference file with the rate/cal image ratio and check that they are the same. The pipeline documentation can be found here: https://jwst-pipeline.readthedocs.io/en/latest/ The pipeline code is available on GitHub: https://github.com/spacetelescope/jwst Author: T. Temim Set up import statements
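The identity behind steps 5 and 6 is worth spelling out: since the flat_field step divides the science (rate) array by the flat reference image, the calibrated image is cal = rate / flat. Rearranging, rate / cal = flat, so (rate / cal) / flat should equal 1 everywhere the flat was applied -- which is exactly what the comparison at the end of this notebook checks.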
###Code
import pytest
import numpy as np
from glob import glob
import json
import jwst
from astropy.io import fits, ascii
from astropy.coordinates import Angle
from astropy.table import Table, vstack, unique
from astropy.stats import sigma_clip
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline
from jwst.associations import asn_from_list
import matplotlib.pyplot as plt
import random
from jwst import associations
from jwst.datamodels import RampModel, ImageModel
from jwst import datamodels
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base
from jwst.pipeline import calwebb_image3
from jwst.pipeline import calwebb_image2
from jwst.pipeline import calwebb_detector1
from astropy.utils.data import get_pkg_data_filename
from ci_watson.artifactory_helpers import get_bigdata
from astropy import table
import crds
import os
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds_cache')  # '$HOME' is not expanded by Python, so build the path explicitly
os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'
os.environ['CRDS_CONTEXT']='jwst_0619.pmap'
os.environ['TEST_BIGDATA']='https://bytesalad.stsci.edu/artifactory/'
#export CRDS_SERVER_URL=https://jwst-crds.stsci.edu
#export CRDS_PATH=$HOME/crds_cache
#export CRDS_CONTEXT='jwst_0619.pmap'
#export TEST_BIGDATA=https://bytesalad.stsci.edu/artifactory/
###Output
_____no_output_____
###Markdown
Print pipeline version number
###Code
jwst.__version__
###Output
_____no_output_____
###Markdown
Read in data from artifactory
###Code
input_file = get_bigdata('jwst_validation_notebooks',
'validation_data',
'source_catalog',
'source_catalog_miri_test',
'det_image_seq1_MIRIMAGE_F560Wexp1_rate.fits')
###Output
_____no_output_____
###Markdown
Read in input image as JWST data model
###Code
from jwst import datamodels
with datamodels.open(input_file) as im:
    # raises an exception if the input file is not a valid image file
pass
###Output
_____no_output_____
###Markdown
Modify header information of input simulations (if needed)
###Code
print(im.meta.wcsinfo.wcsaxes)
im.meta.wcsinfo.wcsaxes=2
print(im.meta.wcsinfo.wcsaxes)
del im.meta.wcsinfo.cdelt3
del im.meta.wcsinfo.crpix3
del im.meta.wcsinfo.crval3
del im.meta.wcsinfo.ctype3
del im.meta.wcsinfo.cunit3
del im.meta.wcsinfo.pc3_1
del im.meta.wcsinfo.pc3_2
#del im.meta.wcsinfo.cdelt4
#del im.meta.wcsinfo.crpix4
#del im.meta.wcsinfo.crval4
#del im.meta.wcsinfo.ctype4
#del im.meta.wcsinfo.cunit4
###Output
_____no_output_____
###Markdown
Run input data through calwebb_detector1 (not done here)
###Code
#det1 = calwebb_detector1.Detector1Pipeline()
#det1.save_results = True
#det1.run(im)
###Output
_____no_output_____
###Markdown
Run output of calwebb_detector1 through calwebb_image2
###Code
input_file = input_file.replace('rateint.fits', 'rate.fits')
im2 = calwebb_image2.Image2Pipeline()
#im2.background.skip = True
im2.assign_wcs.skip = True
im2.flat_field.skip = False
im2.photom.skip=True
im2.resample.skip = True
im2.save_results = True
im2.run(im)
input_file = input_file.replace('rate.fits', 'cal.fits')
with datamodels.open(input_file) as im_cal:
    # raises an exception if the input file is not a valid image file
pass
###Output
_____no_output_____
###Markdown
Calculate the rate/cal image ratio
###Code
ratio_im=im.data/im_cal.data
###Output
_____no_output_____
###Markdown
Get flat_field reference file
###Code
flatreffile = im_cal.meta.ref_file.flat.name
print('Flat reference file', flatreffile)
# find location of file
basename = crds.core.config.pop_crds_uri(flatreffile)
path = crds.locate_file(basename, "jwst")
# open reference file
with datamodels.open(path) as flat_im:
    # raises an exception if the reference file is not a valid image file
pass
###Output
_____no_output_____
###Markdown
Compare flat field reference file with the rate/cal image ratio and check that they are equal
###Code
check_flat=ratio_im/flat_im.data
print('Minimum and maximum values in the check_flat image are:', np.nanmin(check_flat), 'and', np.nanmax(check_flat))
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Testing notebook: flat_field step with MIRI Imaging Instruments Affected: NIRCam, NIRSpec, NIRISS, MIRI, FGS Introduction This test is designed to test the flat_field step in the calwebb_image2 pipeline. This step retrieves the correct flat field reference file and divides the data by the reference file. The SCI extension of the reference file is divided into the SCI array of the science image. The DQ plane of the reference file is combined with the DQ plane of the science file. Error calculation: The VAR_POISSON and VAR_RNOISE variance arrays of the science exposure are divided by the square of the flat-field value for each pixel. A flat-field variance array, VAR_FLAT, is created from the science exposure and flat-field reference file data using the following formula: var_flat = (SCI / flat_SCI)^2 * flat_ERR^2. The total ERR array in the science exposure is updated as the square root of the quadratic sum of VAR_POISSON, VAR_RNOISE, and VAR_FLAT. Description of the steps applied: - If the science data have been taken using a subarray and the flat-field reference file is a full-frame image, extract the corresponding subarray region from the flat-field data.- Find pixels that have a value of NaN or zero in the FLAT reference file SCI array and set their DQ values to “NO_FLAT_FIELD.”- Reset the values of pixels in the flat that have DQ=”NO_FLAT_FIELD” to 1.0, so that they have no effect when applied to the science data.- Apply the flat by dividing it into the science exposure SCI array.- Propagate the FLAT reference file DQ values into the science exposure DQ array using a bitwise OR operation. Documentation For more information on the pipeline step visit the links below. The pipeline documentation can be found here: https://jwst-pipeline.readthedocs.io/en/latest/ The pipeline code is available on GitHub: https://github.com/spacetelescope/jwst Test Description This notebook processes an image through calwebb_image2 (calwebb_detector1 is optional) and examines the output of the flat_field step. The steps are as follows: 1) Retrieve data. 2) Run output of calwebb_detector1 through the flat_field step in calwebb_image2. Visualize the sci arrays of the data before and after the flat_field step is applied. 3) Get flat field reference file. Look at the sci array of the flat_field reference file. 4) Compare the flat field reference file with the rate/cal image ratio and check that they are the same. 5) Look at the ERR arrays of the science data before and after the step is run, and compare to the flat_field reference file ERR array to be sure there is no unexpected pattern seen. Check that a new extension (var_flat) has been added to the output data. 6) Check that the DQ flags were applied as expected. Data used The data used in this test is a simulated MIRI image created using MIRISim. The documentation for MIRISim can be found here: https://wiki.miricle.org/bin/view/Public/MIRISim_Public
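As a concrete illustration of the error propagation described above, here is a minimal numpy sketch; the array names and values are placeholders for this notebook's actual science and reference data, not the pipeline's own implementation:
###Code
import numpy as np

# placeholder stand-ins for the science exposure and flat-field reference arrays
sci = np.array([[10.0, 20.0], [30.0, 40.0]])    # science SCI array
var_poisson = np.full_like(sci, 0.5)            # science Poisson variance
var_rnoise = np.full_like(sci, 0.2)             # science read-noise variance
flat_sci = np.array([[0.9, 1.0], [1.1, 1.0]])   # flat reference SCI array
flat_err = np.full_like(sci, 0.01)              # flat reference ERR array
sci_dq = np.zeros(sci.shape, dtype=np.uint32)   # science DQ plane
flat_dq = np.zeros(sci.shape, dtype=np.uint32)  # flat reference DQ plane
# apply the flat and propagate the variances as the step description states
sci_out = sci / flat_sci
var_poisson_out = var_poisson / flat_sci**2
var_rnoise_out = var_rnoise / flat_sci**2
var_flat = sci**2 / flat_sci**2 * flat_err**2
err_out = np.sqrt(var_poisson_out + var_rnoise_out + var_flat)
# DQ planes combine with a bitwise OR
dq_out = np.bitwise_or(sci_dq, flat_dq)
print(err_out)
###Output
_____no_output_____
###Markdown
The cells below set up a working environment and then check these same relationships on real pipeline products.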
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
print(data_dir)
import os
if 'CRDS_CACHE_TYPE' in os.environ:
if os.environ['CRDS_CACHE_TYPE'] == 'local':
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif os.path.isdir(os.environ['CRDS_CACHE_TYPE']):
os.environ['CRDS_PATH'] = os.environ['CRDS_CACHE_TYPE']
print('CRDS cache location: {}'.format(os.environ['CRDS_PATH']))
###Output
_____no_output_____
###Markdown
Set up import statements. Software imports:- astropy allows various data formats to be read in and written out as well as visualization tools for plotting- numpy provides the framework to work with arrays and standard calculations- matplotlib is a set of plotting software- jwst is all of the jwst calibration pipeline software being tested- download_file, move, and get_bigdata are used to download the data
###Code
from astropy.io import fits, ascii
from astropy.utils.data import download_file
from astropy.visualization import SqrtStretch
from astropy.visualization.mpl_normalize import ImageNormalize
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from shutil import move
import jwst
from jwst.pipeline import Detector1Pipeline, Image2Pipeline
from jwst.flatfield import FlatFieldStep
from jwst.datamodels import RampModel, ImageModel, dqflags
from jwst.pipeline import calwebb_image2
from ci_watson.artifactory_helpers import get_bigdata
import crds
import os
###Output
_____no_output_____
###Markdown
Print pipeline version number
###Code
jwst.__version__
###Output
_____no_output_____
###Markdown
Read in data from artifactory (or Box)
###Code
#input_file = get_bigdata('jwst_validation_notebooks',
# 'validation_data',
# 'flat_field',
# 'flat_field_miri_test',
# 'car007_seq1_MIRIMAGE_F770Wexp1_b771_rate.fits')
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
#print(file_name)
file_list=[('https://stsci.box.com/shared/static/kzef4nvyzzpfy4x4o108x344qg5epaf0.fits',
'car007_seq1_MIRIMAGE_F770Wexp1_b771_rate.fits')]
get_box_files(file_list)
filename = file_list[0][1]
print(filename)
# Read in data from Box
#file_url = 'https://stsci.box.com/shared/static/kzef4nvyzzpfy4x4o108x344qg5epaf0.fits'
#filename = 'car007_seq1_MIRIMAGE_F770Wexp1_b771_rate.fits'
#input_file = download_file(file_url)
#print(input_file)
#ext = os.path.splitext(file_url)[1]
#new_input_file = input_file + ext
#move(input_file, new_input_file)
###Output
_____no_output_____
###Markdown
Read in input image as JWST data model
###Code
#im = ImageModel(new_input_file)
#im.save(filename)
im = ImageModel(filename)
im.info() # Make sure image was read into the model correctly and has the expected extensions
###Output
_____no_output_____
###Markdown
Display the rate (slope) file that is output of calwebb_detector1
###Code
plt.figure(figsize=(20,20))
plt.imshow(im.data, cmap='Greys', origin='lower', vmin=-2,vmax=10)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Run output of calwebb_detector1 through the flat field step
###Code
im2 = FlatFieldStep()
im2.save_results = True
flatfile = im2.run(im)
# Read output of calwebb_image2 into a data model
im_cal = ImageModel(flatfile)
print(im_cal.meta.filename)
###Output
_____no_output_____
###Markdown
Display the calibrated data that is output of calwebb_image2
###Code
plt.figure(figsize=(20,20))
plt.imshow(im_cal.data, cmap='Greys', origin='lower', vmin=-2,vmax=10)#, norm=norm)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Calculate the rate/cal image ratio
###Code
ratio_im = im.data / im_cal.data
print('Minimum and maximum values in the ratio image are:', np.nanmin(ratio_im), 'and', np.nanmax(ratio_im))
###Output
_____no_output_____
###Markdown
Display ratio image. The ratio of the images calculated above should be comparable to the flat field reference file science extension.
###Code
plt.figure(figsize=(20,20))
# mask out DO_NOT_USE values of 1
masked_ratio = np.ma.masked_where((im_cal.dq & dqflags.pixel['DO_NOT_USE'] > 0), ratio_im)
cmap = matplotlib.cm.get_cmap("Greys").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white') # color to mark all DO_NOT_USE pixels
plt.imshow(masked_ratio, cmap=cmap, origin='lower', vmin=0,vmax=1.5)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Get flat_field reference file
###Code
flatreffile = im_cal.meta.ref_file.flat.name
print('Flat reference file', flatreffile)
# find location of file
basename = crds.core.config.pop_crds_uri(flatreffile)
path = crds.locate_file(basename, "jwst")
# open reference file
flat_im = ImageModel(path)
print(flat_im.meta.filename)
print('Minimum and maximum values in the flat reference image are:', np.nanmin(flat_im.data), 'and', np.nanmax(flat_im.data))
###Output
_____no_output_____
###Markdown
Display flat field reference file data
###Code
plt.figure(figsize=(20,20))
plt.imshow(flat_im.data, cmap='Greys', origin='lower', vmin=0,vmax=1.5)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Compare flat field reference file with the rate/cal image ratio and check that they are equal. Since the step sets any flat field values to 1 where the DQ array lists the pixel as DO_NOT_USE, only a masked version of the images should be compared to the flat. Find regions where dq values are not marked as DO_NOT_USE and compare the good regions.
###Code
# Do a check on any specific pixel in the imager. The rate file divided by the flat file should equal the value
# in the flat_fielded output file.
xval = 600
yval = 600
print('Rate image pixel value', im.data[yval, xval])
print('Cal image pixel value', im_cal.data[yval, xval])
print('Flat pixel value', flat_im.data[yval, xval])
print('DQ value for flat file:', flat_im.dq[yval, xval])
div_val = im.data[yval, xval] / flat_im.data[yval, xval]
print('The rate file pixel divided by the flat file pixel is: ', div_val)
try:
assert im_cal.data[yval,xval] == im.data[yval, xval] / flat_im.data[yval, xval]
except AssertionError:
    print('Cal pixel does not equal rate divided by flat. There is a problem here.')
# mask out bad pixels, i.e. pixels marked as DO_NOT_USE in the reference file dq array
badpixels = np.where(flat_im.dq & dqflags.pixel['DO_NOT_USE'] > 0)
# Set bad pixels in images to nan so they are not part of calculations
good_im = im.data.copy()  # copy so the NaN masking does not alter the model arrays
good_im[badpixels] = np.nan
good_cal = im_cal.data.copy()
good_cal[badpixels] = np.nan
good_flat = flat_im.data.copy()
good_flat[badpixels] = np.nan
# Get the ratio of the masked images, and then divide by the masked flat image
test_ratio = good_im / good_cal
check_flat = test_ratio / good_flat
print('Minimum and maximum values in the ratio image are:', np.nanmin(test_ratio), 'and', np.nanmax(test_ratio))
###Output
_____no_output_____
###Markdown
View the ratio image divided by the flat field ((rate / flat_fielded image) / flat field reference file). The values of this image should be around 1.0. The flat fielded science image results from dividing the rate image by the flat field reference file image, so the ratio of the rate image to the flat_fielded image should equal the flat field reference file, and that ratio divided by the flat should equal 1.0.
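In symbols: check = (rate / (rate / flat)) / flat = flat / flat = 1, so any significant departure from 1.0 indicates a problem.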
###Code
plt.figure(figsize=(20,20))
plt.imshow(check_flat, cmap='Greys', origin='lower', vmin=0.5,vmax=1.5)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Check that min and max values of ratio image divided by the flat are 1.0
###Code
print('**************** Passing criteria check: Be sure that both of these values are near 1.0 *******')
print('Minimum and maximum values in the check_flat image are:', np.nanmin(check_flat), 'and', np.nanmax(check_flat))
try:
np.testing.assert_allclose(np.nanmin(check_flat), 1.0, rtol = 0.05)
except AssertionError:
print("AssertionError: The minimum value is not within 5% of 1.0")
try:
np.testing.assert_allclose(np.nanmax(check_flat), 1.0, rtol = 0.05)
except AssertionError:
print("AssertionError: The maximum value is not within 5% of 1.0")
###Output
_____no_output_____
###Markdown
Check ERR arrays. There should be a new variance array (var_flat) attached. Check that the var_flat extension was added to the data after the flat field step was run.
###Code
# Look at extensions of the rate file
uncal_filename = str(im.meta.filename)
hdu = fits.open(uncal_filename)
hdu.info()
hdu.close()
# Look at the extensions of cal file and check that a new extension 'var_flat' was added
filename = str(im_cal.meta.filename)
hdu = fits.open(filename)
hdu.info()
hdu.close()
try:
assert(im_cal.var_flat.shape == im_cal.data.shape)
except AssertionError:
print('AssertionError: var_flat array is not the same shape as the data array')
###Output
_____no_output_____
###Markdown
Look at error arrays before and after flat field step to see if there are any unexplained changes
###Code
# ERR array of rate image
print('Min val: ', np.nanmin(im.err), ' Max val: ', np.nanmax(im.err))
plt.figure(figsize=(20,20))
plt.imshow(im.err, cmap='Greys', origin='lower', vmin=0,vmax=.5)
plt.colorbar()
plt.show()
# ERR array of flat_fielded image
print('Min val: ', np.nanmin(im_cal.err), ' Max val: ', np.nanmax(im_cal.err))
plt.figure(figsize=(20,20))
plt.imshow(im_cal.err, cmap='Greys', origin='lower', vmin=0,vmax=.5)
plt.colorbar()
plt.show()
# ERR array of flat reference file image
print('Min val: ', np.nanmin(flat_im.err), ' Max val: ', np.nanmax(flat_im.err))
plt.figure(figsize=(20,20))
plt.imshow(flat_im.err, cmap='Greys', origin='lower', vmin=0,vmax=.002)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Check DQ flagging. Any pixel flagged as NON_SCIENCE should also be flagged as DO_NOT_USE. Check that this is in place both in the input reference file and in the output science file of the calwebb_image2 pipeline. If there are no assert errors, the test below passes.
###Code
# Check if the output cal file is flagged properly
# Test that all pixels flagged with NON_SCIENCE are also flagged as DO_NOT_USE
nonsciencearray = (im_cal.dq & dqflags.pixel['NON_SCIENCE'] > 0)
badarray = (im_cal.dq & dqflags.pixel['DO_NOT_USE'] > 0)
try:
assert nonsciencearray.all() == badarray.all()
except AssertionError:
print('AssertionError: The NON_SCIENCE pixels are not equal to the DO_NOT_USE pixels in the flat_fielded file.')
# Test if the input reference file had the flags all set the same way
nonsciencearray = (flat_im.dq & dqflags.pixel['NON_SCIENCE'] > 0)
badarray = (flat_im.dq & dqflags.pixel['DO_NOT_USE'] > 0)
try:
assert nonsciencearray.all() == badarray.all()
except AssertionError:
print('AssertionError: The NON_SCIENCE pixels are not equal to the DO_NOT_USE pixels in the input file.')
# Look at DQ planes of rate and cal files and make sure flat field reference file was added to the rate file.
rate_dq = im.dq
cal_dq = im_cal.dq
flat_dq = flat_im.dq
try:
assert cal_dq.all() == rate_dq.all() & flat_dq.all()
except AssertionError:
print('AssertionError: The dq plane of the reference file was not added to the input dq plane properly.')
###Output
_____no_output_____
###Markdown
Look at the dq planes to see how they change. The images below show the dq plane of the rate file before the flat field step, the dq plane of the reference file, and the dq plane after the flat field step is applied. The regions marked in white have been set to 'DO_NOT_USE' in the dq plane. The images should show that the 4QPM regions are marked as DO_NOT_USE by the flat field step: the rate image dq plane does not flag the 4QPM regions, but the flat field dq plane and the cal dq plane should both have them marked as DO_NOT_USE.
###Code
# Look at the dq plane of the rate image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_array = np.ma.masked_where((rate_dq & dqflags.pixel['DO_NOT_USE'] > 0), rate_dq)
cmap = matplotlib.cm.get_cmap("rainbow").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white')
plt.imshow(masked_array, cmap=cmap, origin='lower', vmin=0,vmax=200)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Take a look at the dq plane of the flat field reference file. The dq definitions in the flat field file are as follows (from the dq_def extension): Value DQ Name 1 DO_NOT_USE 2 NON_SCIENCE 4 UNRELIABLE_FLAT 8 CDP_PARTIAL_DATA 16 CDP_LOW_QUAL 32 CDP_UNRELIABLE_ERROR 64 NO_FLAT_FIELD 128 DIFF_PATTERN. If the pixel has an odd-numbered value, it has been combined with the value 'DO_NOT_USE', and that pixel is not used in the division of the science data by the flat. These 'bad' pixels are flagged in the following image by being shown in white. The purple pixels have values of zero, which indicates they are good science pixels.
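As a hedged convenience (not part of the original test), a composite DQ value can be decoded into the flag names from the table above:
###Code
# Decode a bitwise-OR'd DQ value into the flag names listed in the markdown above.
# The flag table below is copied from this notebook's text; it is illustrative only.
flag_table = {1: 'DO_NOT_USE', 2: 'NON_SCIENCE', 4: 'UNRELIABLE_FLAT',
              8: 'CDP_PARTIAL_DATA', 16: 'CDP_LOW_QUAL',
              32: 'CDP_UNRELIABLE_ERROR', 64: 'NO_FLAT_FIELD', 128: 'DIFF_PATTERN'}

def decode_dq(value):
    """Return the list of flag names set in a composite DQ value."""
    return [name for bit, name in flag_table.items() if value & bit]

print(decode_dq(3))   # odd value -> includes DO_NOT_USE, excluded from the division
print(decode_dq(4))   # even value -> still used when applying the flat
###Output
_____no_output_____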
###Code
# Look at the dq plane of the flat_field image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_flat = np.ma.masked_where((flat_dq & dqflags.pixel['DO_NOT_USE'] > 0), flat_dq)
cmap = matplotlib.cm.get_cmap("rainbow").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white')
plt.imshow(masked_flat, cmap=cmap, origin='lower', vmin=0,vmax=5)
plt.colorbar()
plt.show()
# Look at the dq plane of the cal image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_cal = np.ma.masked_where((cal_dq & dqflags.pixel['DO_NOT_USE'] > 0), cal_dq)
cmap = matplotlib.cm.get_cmap("rainbow").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white')
plt.imshow(masked_cal, cmap=cmap, origin='lower', vmin=0,vmax=200)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Take a look at what portion of the flat fielded image will be masked out in combined (image3 pipeline) data. Take the masked NaN region shown above and apply it to the flat fielded image to see what portion of the image will be masked out once calwebb_image3 is run and the DO_NOT_USE pixels are masked out.
###Code
# Look at the dq plane of the cal image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_cal = np.ma.masked_where((cal_dq & dqflags.pixel['DO_NOT_USE'] > 0), im_cal.data)
cmap = matplotlib.cm.get_cmap("Greys").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='blue') # Mark the DO_NOT_USE pixel color that will be masked out
plt.imshow(masked_cal, cmap=cmap, origin='lower', vmin=-2,vmax=10)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Testing flat_field step with MIRI simulated data. Summary: This notebook processes an image through calwebb_image2 and calwebb_image3 (calwebb_detector1 is optional) and examines the output of the flat_field step. The steps are as follows: 1) Set up data path and directory and image file name. 2) Modify header information of input simulations (if needed). 3) Run input data through calwebb_detector1. 4) Run output of calwebb_detector1 through the flat_field step in calwebb_image2. 5) Get flat field reference file. 6) Compare the flat field reference file with the rate/cal image ratio and check that they are the same. The pipeline documentation can be found here: https://jwst-pipeline.readthedocs.io/en/latest/ The pipeline code is available on GitHub: https://github.com/spacetelescope/jwst Author: T. Temim
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
###Output
_____no_output_____
###Markdown
Set up import statements
###Code
import pytest
import numpy as np
from glob import glob
import json
import jwst
from astropy.io import fits, ascii
from astropy.coordinates import Angle
from astropy.table import Table, vstack, unique
from astropy.stats import sigma_clip
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline
from jwst.associations import asn_from_list
import matplotlib.pyplot as plt
import random
from jwst import associations
from jwst.datamodels import RampModel
from astropy.io import fits
import numpy as np
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base
from jwst.pipeline import calwebb_image3
from jwst.pipeline import calwebb_image2
from jwst.pipeline import calwebb_detector1
from astropy.io import fits
from jwst.datamodels import ImageModel
from jwst import datamodels
from astropy.utils.data import get_pkg_data_filename
from ci_watson.artifactory_helpers import get_bigdata
from astropy import table
import crds
import os
os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'
os.environ['CRDS_CONTEXT']='jwst_0619.pmap'
os.environ['TEST_BIGDATA']='https://bytesalad.stsci.edu/artifactory/'
###Output
_____no_output_____
###Markdown
Print pipeline version number
###Code
jwst.__version__
###Output
_____no_output_____
###Markdown
Read in data from artifactory
###Code
input_file = get_bigdata('jwst_validation_notebooks',
'validation_data',
'source_catalog',
'source_catalog_miri_test',
'det_image_seq1_MIRIMAGE_F560Wexp1_rate.fits')
###Output
_____no_output_____
###Markdown
Read in input image as JWST data model
###Code
from jwst import datamodels
im = ImageModel(input_file)
###Output
_____no_output_____
###Markdown
Modify header information of input simulations (if needed)
###Code
print(im.meta.wcsinfo.wcsaxes)
im.meta.wcsinfo.wcsaxes=2
print(im.meta.wcsinfo.wcsaxes)
del im.meta.wcsinfo.cdelt3
del im.meta.wcsinfo.crpix3
del im.meta.wcsinfo.crval3
del im.meta.wcsinfo.ctype3
del im.meta.wcsinfo.cunit3
del im.meta.wcsinfo.pc3_1
del im.meta.wcsinfo.pc3_2
#del im.meta.wcsinfo.cdelt4
#del im.meta.wcsinfo.crpix4
#del im.meta.wcsinfo.crval4
#del im.meta.wcsinfo.ctype4
#del im.meta.wcsinfo.cunit4
###Output
_____no_output_____
###Markdown
Run input data through calwebb_detector1 (not done here)
###Code
#det1 = calwebb_detector1.Detector1Pipeline()
#det1.save_results = True
#det1.run(im)
###Output
_____no_output_____
###Markdown
Run output of calwebb_detector1 through calwebb_image2
###Code
input_file = input_file.replace('rateint.fits', 'rate.fits')
im2 = calwebb_image2.Image2Pipeline()
#im2.background.skip = True
im2.assign_wcs.skip = True
im2.flat_field.skip = False
im2.photom.skip=True
im2.resample.skip = True
im2.save_results = True
im2.run(im)
input_file = input_file.replace('rate.fits', 'cal.fits')
im_cal = ImageModel(input_file)
###Output
_____no_output_____
###Markdown
Calculate the rate/cal image ratio
###Code
ratio_im = im.data / im_cal.data
###Output
_____no_output_____
###Markdown
Get flat_field reference file
###Code
flatreffile = im_cal.meta.ref_file.flat.name
print('Flat reference file', flatreffile)
# find location of file
basename = crds.core.config.pop_crds_uri(flatreffile)
path = crds.locate_file(basename, "jwst")
# open reference file
flat_im = ImageModel(path)
###Output
_____no_output_____
###Markdown
Compare flat field reference file with the rate/cal image ratio and check that they are equal
###Code
check_flat = ratio_im / flat_im.data
print('Minimum and maximum values in the check_flat image are:', np.nanmin(check_flat), 'and', np.nanmax(check_flat))
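# Hedged pass/fail check (added for convenience): both extremes should be ~1.0
# if the flat was applied exactly; rtol mirrors the 5% tolerance used elsewhere.
np.testing.assert_allclose([np.nanmin(check_flat), np.nanmax(check_flat)], 1.0, rtol=0.05)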
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Testing notebook: flat_field step with MIRI Imaging. Instruments Affected: NIRCam, NIRSpec, NIRISS, MIRI, FGS. Introduction and summary of test being run: This notebook processes an image through calwebb_image2 (calwebb_detector1 is optional) and examines the output of the flat_field step. The steps are as follows: 1) Set up data path and directory and image file name. 2) Modify header information of input simulations (if needed). 3) Run input data through calwebb_detector1. (Can be done prior to running this notebook.) 4) Run output of calwebb_detector1 through the flat_field step in calwebb_image2. 5) Get flat field reference file. 6) Compare the flat field reference file with the rate/cal image ratio and check that they are the same. Documentation: The pipeline documentation can be found here: https://jwst-pipeline.readthedocs.io/en/latest/ The pipeline code is available on GitHub: https://github.com/spacetelescope/jwst Data used: The data used in this test is a simulated MIRI image created using MIRISim. The documentation for MIRISim can be found here: https://wiki.miricle.org/bin/view/Public/MIRISim_Public Author: T. Temim
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
print(data_dir)
###Output
_____no_output_____
###Markdown
Set up import statements
###Code
from astropy.io import fits, ascii
import pytest
import numpy as np
import jwst
from jwst.pipeline import Detector1Pipeline, Image2Pipeline
from jwst.datamodels import RampModel, ImageModel, dqflags
from jwst.pipeline import calwebb_image2
from ci_watson.artifactory_helpers import get_bigdata
import crds
import os
# Specify CRDS locations and pmap
os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'
os.environ['CRDS_CONTEXT']='jwst_0619.pmap'
os.environ['TEST_BIGDATA']='https://bytesalad.stsci.edu/artifactory/'
###Output
_____no_output_____
###Markdown
Print pipeline version number
###Code
jwst.__version__
###Output
_____no_output_____
###Markdown
Read in data from artifactory
###Code
input_file = get_bigdata('jwst_validation_notebooks',
'validation_data',
'source_catalog',
'source_catalog_miri_test',
'det_image_seq1_MIRIMAGE_F560Wexp1_rate.fits')
# Put in new simulation once we're sure MIRISim and the pipeline are using the same input flats.
#input_file = get_bigdata('jwst_validation_notebooks',
# 'validation_data',
# 'flat_field',
# 'flat_field_miri_test',
# 'starfield_50star4ptdither_seq1_MIRIMAGE_F1130Wexp1_rate.fits')
###Output
_____no_output_____
###Markdown
Read in input image as JWST data model
###Code
im = ImageModel(input_file)
###Output
_____no_output_____
###Markdown
Modify header information of input simulations (if needed)
###Code
print(im.meta.wcsinfo.wcsaxes)
im.meta.wcsinfo.wcsaxes=2
print(im.meta.wcsinfo.wcsaxes)
del im.meta.wcsinfo.cdelt3
del im.meta.wcsinfo.crpix3
del im.meta.wcsinfo.crval3
del im.meta.wcsinfo.ctype3
del im.meta.wcsinfo.cunit3
del im.meta.wcsinfo.pc3_1
del im.meta.wcsinfo.pc3_2
###Output
_____no_output_____
###Markdown
Run input data through calwebb_detector1 (not done here)
###Code
#det1 = calwebb_detector1.Detector1Pipeline()
#det1.save_results = True
#det1.run(im)
###Output
_____no_output_____
###Markdown
Run output of calwebb_detector1 through calwebb_image2
###Code
#input_file = input_file.replace('rateint.fits', 'rate.fits')
im2 = calwebb_image2.Image2Pipeline()
#im2.background.skip = True
im2.assign_wcs.skip = True
im2.flat_field.skip = False
im2.photom.skip=True
im2.resample.skip = True
im2.save_results = True
im2.run(im)
input_file = input_file.replace('rate.fits', 'cal.fits')
im_cal = ImageModel(input_file)
###Output
_____no_output_____
###Markdown
Calculate the rate/cal image ratio
###Code
ratio_im = im.data / im_cal.data
###Output
_____no_output_____
###Markdown
Get flat_field reference file
###Code
flatreffile = im_cal.meta.ref_file.flat.name
print('Flat reference file', flatreffile)
# find location of file
basename = crds.core.config.pop_crds_uri(flatreffile)
path = crds.locate_file(basename, "jwst")
# open reference file
flat_im = ImageModel(path)
###Output
_____no_output_____
###Markdown
Compare flat field reference file with the rate/cal image ratio and check that they are equal
###Code
check_flat = ratio_im / flat_im.data
print('Minimum and maximum values in the check_flat image are:', np.nanmin(check_flat), 'and', np.nanmax(check_flat))
###Output
_____no_output_____
###Markdown
Both values above should equal 1.0. Check DQ flagging: any pixel flagged as NON_SCIENCE should also be flagged as DO_NOT_USE. Check that this is in place both in the input reference file and in the output science file of the calwebb_image2 pipeline. If there are no assert errors, the test below passes.
###Code
# Check if the output cal file is flagged properly
# Test that all pixels flagged with NON_SCIENCE are also flagged as DO_NOT_USE
nonsciencearray = (im_cal.dq & dqflags.pixel['NON_SCIENCE'] > 0)
badarray = (im_cal.dq & dqflags.pixel['DO_NOT_USE'] > 0)
assert nonsciencearray.all() == badarray.all()
# Test if the input reference file had the flags all set the same way
nonsciencearray = (flat_im.dq & dqflags.pixel['NON_SCIENCE'] > 0)
badarray = (flat_im.dq & dqflags.pixel['DO_NOT_USE'] > 0)
assert nonsciencearray.all() == badarray.all()
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Testing notebook: flat_field step with MIRI Imaging. Instruments Affected: NIRCam, NIRSpec, NIRISS, MIRI, FGS. Introduction: This test is designed to test the flat_field step in the calwebb_image2 pipeline. This step retrieves the correct flat field reference file and divides the data by the reference file. The SCI extension of the reference file is divided into the SCI array of the science image. The DQ plane of the reference file is combined with the DQ plane of the science file. Error calculation: The VAR_POISSON and VAR_RNOISE variance arrays of the science exposure are divided by the square of the flat-field value for each pixel. A flat-field variance array, VAR_FLAT, is created from the science exposure and flat-field reference file data using the following formula: var_flat = SCI array ^ 2 / flat SCI array ^ 2 * flat ERR array ^ 2. The total ERR array in the science exposure is updated as the square root of the quadratic sum of VAR_POISSON, VAR_RNOISE, and VAR_FLAT. Description of the steps applied: - If the science data have been taken using a subarray and the flat-field reference file is a full-frame image, extract the corresponding subarray region from the flat-field data.- Find pixels that have a value of NaN or zero in the FLAT reference file SCI array and set their DQ values to “NO_FLAT_FIELD.”- Reset the values of pixels in the flat that have DQ=”NO_FLAT_FIELD” to 1.0, so that they have no effect when applied to the science data.- Apply the flat by dividing it into the science exposure SCI array.- Propagate the FLAT reference file DQ values into the science exposure DQ array using a bitwise OR operation. Documentation: For more information on the pipeline step visit the links below. The pipeline documentation can be found here: https://jwst-pipeline.readthedocs.io/en/latest/ The pipeline code is available on GitHub: https://github.com/spacetelescope/jwst Test Description: This notebook processes an image through calwebb_image2 (calwebb_detector1 is optional) and examines the output of the flat_field step. The steps are as follows: 1) Retrieve data. 2) Run output of calwebb_detector1 through the flat_field step in calwebb_image2. Visualize the sci arrays of the data before and after the flat_field step is applied. 3) Get flat field reference file. Look at the sci array of the flat_field reference file. 4) Compare the flat field reference file with the rate/cal image ratio and check that they are the same. 5) Look at the ERR arrays of the science data before and after the step is run, and compare to the flat_field reference file ERR array to be sure there is no unexpected pattern seen. Check that a new extension (var_flat) has been added to the output data. 6) Check that the DQ flags were applied as expected. Data used: The data used in this test is a simulated MIRI image created using MIRISim. The documentation for MIRISim can be found here: https://wiki.miricle.org/bin/view/Public/MIRISim_Public
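For clarity, the same relations in LaTeX form: $\mathrm{VAR\_FLAT} = \mathrm{SCI}^2 / \mathrm{FLAT}^2 \times \mathrm{ERR}_\mathrm{flat}^2$ and $\mathrm{ERR} = \sqrt{\mathrm{VAR\_POISSON} + \mathrm{VAR\_RNOISE} + \mathrm{VAR\_FLAT}}$.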
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
print(data_dir)
###Output
_____no_output_____
###Markdown
Set up import statements. Software imports:- astropy allows various data formats to be read in and written out as well as visualization tools for plotting- numpy provides the framework to work with arrays and standard calculations- matplotlib is a set of plotting software- jwst is all of the jwst calibration pipeline software being tested- download_file, move, and get_bigdata are used to download the data
###Code
from astropy.io import fits, ascii
from astropy.utils.data import download_file
from astropy.visualization import SqrtStretch
from astropy.visualization.mpl_normalize import ImageNormalize
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from shutil import move
import jwst
from jwst.pipeline import Detector1Pipeline, Image2Pipeline
from jwst.flatfield import FlatFieldStep
from jwst.datamodels import RampModel, ImageModel, dqflags
from jwst.pipeline import calwebb_image2
from ci_watson.artifactory_helpers import get_bigdata
import crds
import os
###Output
_____no_output_____
###Markdown
Print pipeline version number
###Code
jwst.__version__
###Output
_____no_output_____
###Markdown
Read in data from artifactory (or Box)
###Code
#input_file = get_bigdata('jwst_validation_notebooks',
# 'validation_data',
# 'flat_field',
# 'flat_field_miri_test',
# 'car007_seq1_MIRIMAGE_F770Wexp1_b771_rate.fits')
from astropy.utils.data import download_file
from pathlib import Path
from shutil import move
from os.path import splitext
def get_box_files(file_list):
for box_url,file_name in file_list:
if 'https' not in box_url:
box_url = 'https://stsci.box.com/shared/static/' + box_url
downloaded_file = download_file(box_url)
if Path(file_name).suffix == '':
ext = splitext(box_url)[1]
file_name += ext
move(downloaded_file, file_name)
#print(file_name)
file_list=[('https://stsci.box.com/shared/static/kzef4nvyzzpfy4x4o108x344qg5epaf0.fits',
'car007_seq1_MIRIMAGE_F770Wexp1_b771_rate.fits')]
get_box_files(file_list)
filename = file_list[0][1]
print(filename)
# Read in data from Box
#file_url = 'https://stsci.box.com/shared/static/kzef4nvyzzpfy4x4o108x344qg5epaf0.fits'
#filename = 'car007_seq1_MIRIMAGE_F770Wexp1_b771_rate.fits'
#input_file = download_file(file_url)
#print(input_file)
#ext = os.path.splitext(file_url)[1]
#new_input_file = input_file + ext
#move(input_file, new_input_file)
###Output
_____no_output_____
###Markdown
Read in input image as JWST data model
###Code
#im = ImageModel(new_input_file)
#im.save(filename)
im = ImageModel(filename)
im.info() # Make sure image was read into the model correctly and has the expected extensions
###Output
_____no_output_____
###Markdown
Display the rate (slope) file that is output of calwebb_detector1
###Code
plt.figure(figsize=(20,20))
plt.imshow(im.data, cmap='Greys', origin='lower', vmin=-2,vmax=10)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Run output of calwebb_detector1 through the flat field step
###Code
im2 = FlatFieldStep()
im2.save_results = True
flatfile = im2.run(im)
# Read output of calwebb_image2 into a data model
im_cal = ImageModel(flatfile)
print(im_cal.meta.filename)
###Output
_____no_output_____
###Markdown
Display the calibrated data that is output of calwebb_image2
###Code
plt.figure(figsize=(20,20))
plt.imshow(im_cal.data, cmap='Greys', origin='lower', vmin=-2,vmax=10)#, norm=norm)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Calculate the rate/cal image ratio
###Code
ratio_im = im.data / im_cal.data
print('Minimum and maximum values in the ratio image are:', np.nanmin(ratio_im), 'and', np.nanmax(ratio_im))
###Output
_____no_output_____
###Markdown
Display ratio image. The ratio of the images calculated above should be comparable to the flat field reference file science extension.
###Code
plt.figure(figsize=(20,20))
# mask out DO_NOT_USE values of 1
masked_ratio = np.ma.masked_where((im_cal.dq & dqflags.pixel['DO_NOT_USE'] > 0), ratio_im)
cmap = matplotlib.cm.get_cmap("Greys").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white') # color to mark all DO_NOT_USE pixels
plt.imshow(masked_ratio, cmap=cmap, origin='lower', vmin=0,vmax=1.5)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Get flat_field reference file
###Code
flatreffile = im_cal.meta.ref_file.flat.name
print('Flat reference file', flatreffile)
# find location of file
basename = crds.core.config.pop_crds_uri(flatreffile)
path = crds.locate_file(basename, "jwst")
# open reference file
flat_im = ImageModel(path)
print(flat_im.meta.filename)
print('Minimum and maximum values in the flat reference image are:', np.nanmin(flat_im.data), 'and', np.nanmax(flat_im.data))
###Output
_____no_output_____
###Markdown
Display flat field reference file data
###Code
plt.figure(figsize=(20,20))
plt.imshow(flat_im.data, cmap='Greys', origin='lower', vmin=0,vmax=1.5)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Compare flat field reference file with the rate/cal image ratio and check that they are equal. Since the step sets any flat field values to 1 where the DQ array lists the pixel as DO_NOT_USE, only a masked version of the images should be compared to the flat. Find regions where dq values are not marked as DO_NOT_USE and compare the good regions.
###Code
# Do a check on any specific pixel in the imager. The rate file divided by the flat file should equal the value
# in the flat_fielded output file.
xval = 600
yval = 600
print('Rate image pixel value', im.data[yval, xval])
print('Cal image pixel value', im_cal.data[yval, xval])
print('Flat pixel value', flat_im.data[yval, xval])
print('DQ value for flat file:', flat_im.dq[yval, xval])
div_val = im.data[yval, xval] / flat_im.data[yval, xval]
print('The rate file pixel divided by the flat file pixel is: ', div_val)
try:
assert im_cal.data[yval,xval] == im.data[yval, xval] / flat_im.data[yval, xval]
except AssertionError:
    print('Cal pixel does not equal rate divided by flat. There is a problem here.')
# mask out bad pixels, i.e. pixels marked as DO_NOT_USE in the reference file dq array
badpixels = np.where(flat_im.dq & dqflags.pixel['DO_NOT_USE'] > 0)
# Set bad pixels in images to nan so they are not part of calculations
good_im = im.data.copy()  # copy so the NaN masking does not alter the model arrays
good_im[badpixels] = np.nan
good_cal = im_cal.data.copy()
good_cal[badpixels] = np.nan
good_flat = flat_im.data.copy()
good_flat[badpixels] = np.nan
# Get the ratio of the masked images, and then divide by the masked flat image
test_ratio = good_im / good_cal
check_flat = test_ratio / good_flat
print('Minimum and maximum values in the ratio image are:', np.nanmin(test_ratio), 'and', np.nanmax(test_ratio))
###Output
_____no_output_____
###Markdown
View the ratio image divided by the flat field ((rate / flat_fielded image) / flat field reference file). The values of this image should be around 1.0. The flat fielded science image results from dividing the rate image by the flat field reference file image, so the ratio of the rate image to the flat_fielded image should equal the flat field reference file, and that ratio divided by the flat should equal 1.0.
###Code
plt.figure(figsize=(20,20))
plt.imshow(check_flat, cmap='Greys', origin='lower', vmin=0.5,vmax=1.5)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Check that min and max values of ratio image divided by the flat are 1.0
###Code
print('**************** Passing criteria check: Be sure that both of these values are near 1.0 *******')
print('Minimum and maximum values in the check_flat image are:', np.nanmin(check_flat), 'and', np.nanmax(check_flat))
try:
np.testing.assert_allclose(np.nanmin(check_flat), 1.0, rtol = 0.05)
except AssertionError:
print("AssertionError: The minimum value is not within 5% of 1.0")
try:
np.testing.assert_allclose(np.nanmax(check_flat), 1.0, rtol = 0.05)
except AssertionError:
print("AssertionError: The maximum value is not within 5% of 1.0")
###Output
_____no_output_____
###Markdown
Check ERR arrays. There should be a new variance array (var_flat) attached. Check that the var_flat extension was added to the data after the flat field step was run.
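As an optional, hedged consistency check (assuming the model exposes var_poisson, var_rnoise, and var_flat arrays as described in the introduction), the total ERR can be compared against the quadratic sum of the variances:
###Code
# Verify ERR = sqrt(VAR_POISSON + VAR_RNOISE + VAR_FLAT) on the flat-fielded image.
# This assumes all three variance arrays are present on the data model.
total_err = np.sqrt(im_cal.var_poisson + im_cal.var_rnoise + im_cal.var_flat)
print('Maximum absolute difference from the ERR array:',
      np.nanmax(np.abs(total_err - im_cal.err)))
###Output
_____no_output_____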
###Code
# Look at extensions of the rate file
uncal_filename = str(im.meta.filename)
hdu = fits.open(uncal_filename)
hdu.info()
hdu.close()
# Look at the extensions of cal file and check that a new extension 'var_flat' was added
filename = str(im_cal.meta.filename)
hdu = fits.open(filename)
hdu.info()
hdu.close()
try:
assert(im_cal.var_flat.shape == im_cal.data.shape)
except AssertionError:
print('AssertionError: var_flat array is not the same shape as the data array')
###Output
_____no_output_____
###Markdown
Look at error arrays before and after flat field step to see if there are any unexplained changes
###Code
# ERR array of rate image
print('Min val: ', np.nanmin(im.err), ' Max val: ', np.nanmax(im.err))
plt.figure(figsize=(20,20))
plt.imshow(im.err, cmap='Greys', origin='lower', vmin=0,vmax=.5)
plt.colorbar()
plt.show()
# ERR array of flat_fielded image
print('Min val: ', np.nanmin(im_cal.err), ' Max val: ', np.nanmax(im_cal.err))
plt.figure(figsize=(20,20))
plt.imshow(im_cal.err, cmap='Greys', origin='lower', vmin=0,vmax=.5)
plt.colorbar()
plt.show()
# ERR array of flat reference file image
print('Min val: ', np.nanmin(flat_im.err), ' Max val: ', np.nanmax(flat_im.err))
plt.figure(figsize=(20,20))
plt.imshow(flat_im.err, cmap='Greys', origin='lower', vmin=0,vmax=.002)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Check DQ flagging. Any pixel flagged as NON_SCIENCE should also be flagged as DO_NOT_USE. Check that this is in place both in the input reference file and in the output science file of the calwebb_image2 pipeline. If there are no assert errors, the test below passes.
###Code
# Check if the output cal file is flagged properly
# Test that all pixels flagged with NON_SCIENCE are also flagged as DO_NOT_USE
nonsciencearray = (im_cal.dq & dqflags.pixel['NON_SCIENCE'] > 0)
badarray = (im_cal.dq & dqflags.pixel['DO_NOT_USE'] > 0)
try:
assert nonsciencearray.all() == badarray.all()
except AssertionError:
print('AssertionError: The NON_SCIENCE pixels are not equal to the DO_NOT_USE pixels in the flat_fielded file.')
# Test if the input reference file had the flags all set the same way
nonsciencearray = (flat_im.dq & dqflags.pixel['NON_SCIENCE'] > 0)
badarray = (flat_im.dq & dqflags.pixel['DO_NOT_USE'] > 0)
try:
assert nonsciencearray.all() == badarray.all()
except AssertionError:
print('AssertionError: The NON_SCIENCE pixels are not equal to the DO_NOT_USE pixels in the input file.')
# Look at DQ planes of rate and cal files and make sure flat field reference file was added to the rate file.
rate_dq = im.dq
cal_dq = im_cal.dq
flat_dq = flat_im.dq
try:
assert cal_dq.all() == rate_dq.all() & flat_dq.all()
except AssertionError:
print('AssertionError: The dq plane of the reference file was not added to the input dq plane properly.')
###Output
_____no_output_____
###Markdown
Look at the dq planes to see how they change. The images below show the dq plane of the rate file before the flat field step, the dq plane of the reference file, and the dq plane after the flat field step is applied. The regions marked in white have been set to 'DO_NOT_USE' in the dq plane. The images should show that the 4QPM regions are marked as DO_NOT_USE by the flat field step: the rate image dq plane does not flag the 4QPM regions, but the flat field dq plane and the cal dq plane should both have them marked as DO_NOT_USE.
###Code
# Look at the dq plane of the rate image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_array = np.ma.masked_where((rate_dq & dqflags.pixel['DO_NOT_USE'] > 0), rate_dq)
cmap = matplotlib.cm.get_cmap("rainbow").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white')
plt.imshow(masked_array, cmap=cmap, origin='lower', vmin=0,vmax=200)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Take a look at the dq plane of the flat field reference file. The dq definitions in the flat field file are as follows (from the dq_def extension): Value DQ Name 1 DO_NOT_USE 2 NON_SCIENCE 4 UNRELIABLE_FLAT 8 CDP_PARTIAL_DATA 16 CDP_LOW_QUAL 32 CDP_UNRELIABLE_ERROR 64 NO_FLAT_FIELD 128 DIFF_PATTERN. If the pixel has an odd-numbered value, it has been combined with the value 'DO_NOT_USE', and that pixel is not used in the division of the science data by the flat. These 'bad' pixels are flagged in the following image by being shown in white. The purple pixels have values of zero, which indicates they are good science pixels.
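As a small, hedged addition (not part of the original test), the fraction of flat pixels excluded from the division can be quantified directly:
###Code
# Fraction of pixels in the flat's dq plane carrying the DO_NOT_USE bit.
frac_bad = np.mean((flat_im.dq & dqflags.pixel['DO_NOT_USE']) > 0)
print('Fraction of DO_NOT_USE pixels in the flat dq plane:', frac_bad)
###Output
_____no_output_____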
###Code
# Look at the dq plane of the flat_field image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_flat = np.ma.masked_where((flat_dq & dqflags.pixel['DO_NOT_USE'] > 0), flat_dq)
cmap = matplotlib.cm.get_cmap("rainbow").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white')
plt.imshow(masked_flat, cmap=cmap, origin='lower', vmin=0,vmax=5)
plt.colorbar()
plt.show()
# Look at the dq plane of the cal image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_cal = np.ma.masked_where((cal_dq & dqflags.pixel['DO_NOT_USE'] > 0), cal_dq)
cmap = matplotlib.cm.get_cmap("rainbow").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='white')
plt.imshow(masked_cal, cmap=cmap, origin='lower', vmin=0,vmax=200)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Take a look at what portion of the flat fielded image will be masked out in combined (image3 pipeline) data. Take the masked NaN region shown above and apply it to the flat fielded image to see what portion of the image will be masked out once calwebb_image3 is run and the DO_NOT_USE pixels are masked out.
###Code
# Look at the dq plane of the cal image
plt.figure(figsize=(20,20))
# call out DO_NOT_USE values of 1
masked_cal = np.ma.masked_where((cal_dq & dqflags.pixel['DO_NOT_USE'] > 0), im_cal.data)
cmap = matplotlib.cm.get_cmap("Greys").copy() # Can be any colormap that you want after the cm
cmap.set_bad(color='blue') # Mark the DO_NOT_USE pixel color that will be masked out
plt.imshow(masked_cal, cmap=cmap, origin='lower', vmin=-2,vmax=10)
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Testing flat_field step with MIRI simulated data. Summary: This notebook processes an image through calwebb_image2 and calwebb_image3 (calwebb_detector1 is optional) and examines the output of the flat_field step. The steps are as follows: 1) Set up data path and directory and image file name. 2) Modify header information of input simulations (if needed). 3) Run input data through calwebb_detector1. 4) Run output of calwebb_detector1 through the flat_field step in calwebb_image2. 5) Get flat field reference file. 6) Compare the flat field reference file with the rate/cal image ratio and check that they are the same. The pipeline documentation can be found here: https://jwst-pipeline.readthedocs.io/en/latest/ The pipeline code is available on GitHub: https://github.com/spacetelescope/jwst Author: T. Temim. Set up import statements
###Code
import pytest
import numpy as np
from glob import glob
import json
import jwst
from astropy.io import fits, ascii
from astropy.coordinates import Angle
from astropy.table import Table, vstack, unique
from astropy.stats import sigma_clip
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline
from jwst.associations import asn_from_list
import matplotlib.pyplot as plt
import random
from jwst import associations
from jwst.datamodels import RampModel
from astropy.io import fits
import numpy as np
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base
from jwst.pipeline import calwebb_image3
from jwst.pipeline import calwebb_image2
from jwst.pipeline import calwebb_detector1
from astropy.io import fits
from jwst.datamodels import ImageModel
from jwst import datamodels
from astropy.utils.data import get_pkg_data_filename
from ci_watson.artifactory_helpers import get_bigdata
from astropy import table
import crds
import os
os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu'
os.environ['CRDS_CONTEXT']='jwst_0619.pmap'
os.environ['TEST_BIGDATA']='https://bytesalad.stsci.edu/artifactory/'
###Output
_____no_output_____
###Markdown
Print pipeline version number
###Code
jwst.__version__
###Output
_____no_output_____
###Markdown
Read in data from artifactory
###Code
input_file = get_bigdata('jwst_validation_notebooks',
'validation_data',
'source_catalog',
'source_catalog_miri_test',
'det_image_seq1_MIRIMAGE_F560Wexp1_rate.fits')
###Output
_____no_output_____
###Markdown
Read in input image as JWST data model
###Code
from jwst import datamodels
im = ImageModel(input_file)
###Output
_____no_output_____
###Markdown
Modify header information of input simulations (if needed)
###Code
print(im.meta.wcsinfo.wcsaxes)
im.meta.wcsinfo.wcsaxes=2
print(im.meta.wcsinfo.wcsaxes)
del im.meta.wcsinfo.cdelt3
del im.meta.wcsinfo.crpix3
del im.meta.wcsinfo.crval3
del im.meta.wcsinfo.ctype3
del im.meta.wcsinfo.cunit3
del im.meta.wcsinfo.pc3_1
del im.meta.wcsinfo.pc3_2
#del im.meta.wcsinfo.cdelt4
#del im.meta.wcsinfo.crpix4
#del im.meta.wcsinfo.crval4
#del im.meta.wcsinfo.ctype4
#del im.meta.wcsinfo.cunit4
###Output
_____no_output_____
###Markdown
Run input data through calwebb_detector1 (not done here)
###Code
#det1 = calwebb_detector1.Detector1Pipeline()
#det1.save_results = True
#det1.run(im)
###Output
_____no_output_____
###Markdown
Run output of calwebb_detector1 through calwebb_image2
###Code
input_file = input_file.replace('rateint.fits', 'rate.fits')
im2 = calwebb_image2.Image2Pipeline()
#im2.background.skip = True
im2.assign_wcs.skip = True
im2.flat_field.skip = False
im2.photom.skip=True
im2.resample.skip = True
im2.save_results = True
im2.run(im)
input_file = input_file.replace('rate.fits', 'cal.fits')
im_cal = ImageModel(input_file)
###Output
_____no_output_____
###Markdown
Calculate the rate/cal image ratio
###Code
ratio_im = im.data / im_cal.data
###Output
_____no_output_____
###Markdown
Get flat_field reference file
###Code
flatreffile = im_cal.meta.ref_file.flat.name
print('Flat reference file', flatreffile)
# find location of file
basename = crds.core.config.pop_crds_uri(flatreffile)
path = crds.locate_file(basename, "jwst")
# open reference file
flat_im = ImageModel(path)
###Output
_____no_output_____
###Markdown
Compare flat field reference file with the rate/cal image ratio and check that they are equal
###Code
check_flat = ratio_im / flat_im.data
print('Minimum and maximum values in the check_flat image are:', np.nanmin(check_flat), 'and', np.nanmax(check_flat))
###Output
_____no_output_____ |
tracking_performance/Survivor_Incomer_Mistrack.ipynb | ###Markdown
Flood Fill the Slides of the Representative Movie:+ **SURVIVORS**: Mark those cells which are correctly tracked back to their root at frame 0 (cyan)+ **INCOMERS**: Separate those cells which are incomers into the FOV during the movie (gold) - the tree founder cell must be initiated near the FOV boundary *(use 50 px)* - the tree founder cell must be successfully tracked for a certain time period *(use 50 frames)*+ **MISTRACKS**: Highlight those cells which were mistracked in their tree due to breakage (red)
###Code
import h5py
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
from skimage import io
from skimage.segmentation import flood_fill
###Output
_____no_output_____
###Markdown
Let's nominate which file we want to process:
###Code
hdf5_file = "../example_segment_classif_tracked_movie.hdf5"
###Output
_____no_output_____
###Markdown
Set the thresholds used to categorise which cells will be considered incomers: At this moment, only those tracks which appear within **50 px distance** of any FOV edge will be considered *incomers* if they:+ live for at least **50 frames**, or+ have divided in the meantime. Change appropriately if needed.
###Code
time_threshold = 50
dist_threshold = 50
###Output
_____no_output_____
###Markdown
Process the tracks & assign the survivor (cyan), incomer (yellow) or mistrack (red) label to each tracklet:
###Code
survivor, incomer, mistrack = [], [], []
with h5py.File(hdf5_file, 'r') as f:
ID_list = [item[0] for item in f["tracks"]["obj_type_1"]["LBEPR"]]
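    # LBEPR rows appear to follow the (Label, Begin, End, Parent, Root) convention;
    # the integer indices cell[0]..cell[4] below are read under that assumption.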
    # Shortlist all cells that are parents of 2 children; the set of this list should be half its length:
parents = list(set([item[3] for item in f["tracks"]["obj_type_1"]["LBEPR"] if item[3] != 0]))
# Survivors = founders:
for cell in f["tracks"]["obj_type_1"]["LBEPR"]:
if cell[1] == 0:
survivor.append(cell[0])
# Survivors = progeny:
for cell in f["tracks"]["obj_type_1"]["LBEPR"]:
if cell[4] in survivor:
survivor.append(cell[0])
# Incomers = founders:
for cell in f["tracks"]["obj_type_1"]["LBEPR"]:
if cell[0] not in survivor:
# Must be a founder:
if cell[4] == 0:
# Look at cell coordinates at its frame of appearance:
cell_map = f["tracks"]["obj_type_1"]["map"][ID_list.index(cell[0])]
trk_init = f["tracks"]["obj_type_1"]["tracks"][cell_map[0]]
coo_init = f["objects"]["obj_type_1"]["coords"][trk_init]
# CONDITION #1: distance threshold: x (index=2) or y (index=1) close to FOV borders?
if not (dist_threshold < coo_init[2] < 1600 - dist_threshold and
dist_threshold < coo_init[1] < 1200 - dist_threshold):
# CONDITION #2: time threshold: is the track alive for at least XYZ frames?
if cell[2] - cell[1] + 1 > time_threshold:
incomer.append(cell[0])
else:
# If it doesn't live long enough, it could have divided: is it a parent?
if cell[0] in parents:
incomer.append(cell[0])
else:
mistrack.append(cell[0])
else:
mistrack.append(cell[0])
# Incomers = progeny:
for cell in f["tracks"]["obj_type_1"]["LBEPR"]:
if cell[4] in incomer:
incomer.append(cell[0])
# All other cells must be tracking errors:
for cell in f["tracks"]["obj_type_1"]["LBEPR"]:
if not (cell[0] in survivor or cell[0] in incomer):
if not cell[0] in mistrack:
mistrack.append(cell[0])
###Output
_____no_output_____
###Markdown
Allocate colormap labels to cells, then to the respective coordinates of their objects (values match the colormap indices defined below):+ Survivor = 2+ Incomer = 3+ Mistrack = 4
###Code
object_vector = []
with h5py.File(hdf5_file, 'r') as f:
object_vector = [0 for _ in range(len(f["objects"]["obj_type_1"]["coords"]))]
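    # Colormap indices assumed below: 0 = background, 1 = dummy/unmatched objects
    # (negative track references), 2 = survivor, 3 = incomer, 4 = mistrack.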
for e, item in enumerate(ID_list):
if item in survivor:
index = 2
elif item in incomer:
index = 3
elif item in mistrack:
index = 4
else:
raise ValueError
cell_map = f["tracks"]["obj_type_1"]["map"][e]
for trk in f["tracks"]["obj_type_1"]["tracks"][cell_map[0]:cell_map[1]]:
if trk > 0:
object_vector[trk] = index
else:
object_vector[trk] = 1
###Output
_____no_output_____
###Markdown
Define the custom colormap:
###Code
viridis = cm.get_cmap('viridis', 256)
newcolors = viridis(np.linspace(0, 1, 256))
newcolors[0:1, :] = np.array([0/256, 0/256, 0/256, 1])
newcolors[1:2, :] = np.array([150/256, 150/256, 150/256, 1])
newcolors[2:3, :] = np.array([0/256, 255/256, 255/256, 1]) # survivor: cyan
newcolors[3:4, :] = np.array([255/256, 255/256, 0/256, 1]) # incomer: yellow
newcolors[4:5, :] = np.array([255/256, 0/256, 0/256, 1]) # mistrack: red
newcmp = ListedColormap(newcolors[:5])
###Output
_____no_output_____
###Markdown
Fill each object in the frame:
###Code
dr = "" # specify your saving directory, otherwise images are saved to where this notebook is stored
frames = range(0, 800 + 1, 100)
with h5py.File(hdf5_file, 'r') as f:
for frame in frames:
_ = plt.figure(figsize=(24, 18))
img = f["segmentation"]["images"][frame]
# Process the image: label individual objects & store their slices:
lbl_image = sp.ndimage.label(img)[0]
found_objects = sp.ndimage.find_objects(input=lbl_image)
# Map the coordinates:
mp = f["objects"]["obj_type_1"]["map"][frame]
coords_list = f["objects"]["obj_type_1"]["coords"][mp[0]:mp[1]]
fill_list = object_vector[mp[0]:mp[1]]
# Check whether number of detected objects matches found objects in labeled array:
if len(coords_list) != len(found_objects):
raise ValueError
# Iterate:
for e, (obj, lab, slc) in enumerate(zip(coords_list, fill_list, found_objects)):
if not (slc[0].start <= obj[1] <= slc[0].stop and slc[1].start <= obj[2] <= slc[1].stop):
raise Exception("Warning")
# Check if the pixel at the centre of your cell in non-zero:
seed_point = img[int(obj[1])][int(obj[2])]
if seed_point != 0:
flood_fill(image=img, seed_point=(int(obj[1]), int(obj[2])), new_value=lab, in_place=True)
else:
idx = list(lbl_image[slc[0].start][slc[1].start:slc[1].stop]).index(e+1)
seed_point = img[slc[0].start][slc[1].start+idx]
if seed_point != 0:
flood_fill(image=img, seed_point=(slc[0].start, slc[1].start+idx), new_value=lab, in_place=True)
else:
print("Disaster!")
# Colormap will normalise its range: include corner pixels with different colors:
img[0][0] = 1
img[0][1599] = 2
img[1199][0] = 3
img[1199][1599] = 4
plt.axis("off")
plt.imshow(img, cmap=newcmp)
#plt.imsave(fname=dr+f"frm_{frame}.tiff", arr=img, cmap=newcmp)
###Output
_____no_output_____ |
DAY 401 ~ 500/DAY424_[BaekJoon] 무한 문자열 (Python).ipynb | ###Markdown
Wednesday, July 14, 2021. BaekJoon - Infinite String (Python). Problem: https://www.acmicpc.net/problem/12871 Blog: https://somjang.tistory.com/entry/BaekJoon-12871%EB%B2%88-%EB%AC%B4%ED%95%9C-%EB%AC%B8%EC%9E%90%EC%97%B4-Python Solution
###Code
import math
def get_least_common_multiple(string_s_len, string_t_len):
return string_s_len * string_t_len // math.gcd(string_s_len, string_t_len)
def infinity_string(string_s, string_t):
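    # s and t generate the same infinite string iff repeating each string up to
    # the least common multiple of their lengths yields identical strings.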
is_infinity_string = 0
string_s_len = len(string_s)
string_t_len = len(string_t)
standard_len = get_least_common_multiple(string_s_len, string_t_len)
s_multiply_num = standard_len // string_s_len
t_multiply_num = standard_len // string_t_len
if string_s * s_multiply_num == string_t * t_multiply_num:
is_infinity_string = 1
return is_infinity_string
if __name__ == "__main__":
string_s = input()
string_t = input()
print(infinity_string(string_s, string_t))
###Output
_____no_output_____ |
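###Markdown
A quick sanity check of the solution on hand-made sample strings (hedged examples, not official judge data):
###Code
# "ab" and "abab" repeat to the same infinite string -> expect 1
print(infinity_string("ab", "abab"))
# "ab" and "aba" do not -> expect 0
print(infinity_string("ab", "aba"))
###Output
_____no_output_____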
REST/Udemy-Django-Python/Django REST Course - 08 Intro To Viewsets.ipynb | ###Markdown
___ ___ What Is A ViewSet?This is the other framwork view. APIView was the other.Just like APIViews, they allow us to write the logic for our endpoints. However - instead of defining functions that map to HTTP methods, Viewsets accept functions that map to common API object actions.- list- create- retrieve- update- partial update- destroyThese Viewsets take care of a lot of the common logic for you.Additional benefits:- perfect for standard database operations- fastest way to make a database interface When To Use ViewSetsMost of the time it comes down to personal preference. Here are some examples for when you need to use Viewsets over APIViews:1. need a simple CRUD (**C**reate **R**ead **U**pdate **D**elete) interface to your DB2. need quick and simple API to manage pre-defined objects3. need little to no customization on the logic4. you are working with standard data structures Create A Simple ViewsetThis section will create the "Hello Viewset".1. Load up your `views.py` file in the **profiles_api** app folder.2. Add the following import above the APIView import: `from rest_framework import viewsets` This is the base module for all the different viewsets that Django REST framework offers. 3. Below your APIView code, create a new class that inherits from: `viewsets.ViewSet`
###Code
class HelloViewSet(viewsets.ViewSet):
"""Test API ViewSet."""
def list(self, request): # typically a HTTP GET to the root of an endpoint
"""Return a hello message."""
a_viewset = [
'Uses actions (list, create, retrieve, update, partial_update)',
'Automatically maps to URLs using Routers',
'Provides more functionality with less code.'
]
return Response({'message': 'Hello!', 'a_viewset': a_viewset})
###Output
_____no_output_____
###Markdown
The functions within this new class are actions you would take on an object - not HTTP calls. Add A URL RouterJust like with APIView, we need to map (or register) our ViewSet to a URL so we can use it in the browser. Django REST has a special Router class that can be used to automatically set up the different routes for our URL to our ViewSet.This is one of the advantages for using a ViewSet over an APIView.Now it's time to add a router to your HelloViewSet.1. Go to your `url.py` file in the profiles_api folder - **not** the one in the profiles_project folder.2. Add/update the following imports at the top: - update `from django.urls import path` to `from django.urls import path, include` - `from rest_framework.routers import DefaultRouter`3. Create your router under the imports & assign it a URL
###Code
router = DefaultRouter()
# register a new URL that points to our HelloViewSet
# input 1 - string is the URL you would like to use
# input 2 - name of the viewset we want to assign/register to the router
# input 3 - base name (used for retrieving URLs in Router)
# NOTE: Django REST framework 3.9+ renames this keyword to `basename`
router.register('hello-viewset', views.HelloViewSet, base_name='hello-viewset')
###Output
_____no_output_____
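###Markdown
For reference, here is a sketch of what the complete `profiles_api/urls.py` could look like once the router is included in the urlpatterns (described in step 4 below). The `hello-view/` route and the **HelloAPIView** reference are assumptions based on the earlier APIView lesson, so adjust them to match your own code:
###Code
# Hypothetical complete urls.py for the profiles_api app
from django.urls import path, include
from rest_framework.routers import DefaultRouter
from . import views

router = DefaultRouter()
# `base_name` matches the course's DRF version; newer releases call it `basename`
router.register('hello-viewset', views.HelloViewSet, base_name='hello-viewset')

urlpatterns = [
    # route for the APIView from the earlier lesson (an assumption)
    path('hello-view/', views.HelloAPIView.as_view()),
    path('', include(router.urls)),
]
###Output
_____no_output_____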
###Markdown
4. In the urlpatterns, create a new url with a blank string for the RegEx `path('', include(router.urls))` That way the router creates the URLs for you. Testing Our ViewsetEnsure the development server is up and running. It's possible you may have to restart it to pick up the latest changes.Go to your browser and the root API URL: `http://127.0.0.1:8000/api/`It should look a little different than last time:It used to say there was no page linked to this URL!If you click the link next to it, you will find the Hello-Viewset you just created.`http://127.0.0.1:8000/api/hello-viewset/` Commit To GitIn your **git bash** program ...1. go to the project directory: `cd workspace/PROJECTNAME` (in this example **profiles-rest-api**)2. Call `git add .`3. Call `git commit -am "Added our first viewset and router."` Add Create, Retrieve, Update, Partial Update, And Destroy FunctionsIt's time to complete our ViewSet.1. Locate the `views.py` file in the profiles_api app folder.2. Go to the bottom, where you added the **HelloViewSet** class. You will now add a serializer class to this class the same way we added it to the **HelloAPIView** class.**NOTE:** ViewSets and APIViews can use the same serializer!3. Below the docstring, add the following: `serializer_class = serializers.HelloSerializer`4. Add the functions to the class.
###Code
# These methods live inside the HelloViewSet class (indent them accordingly);
# they assume `from rest_framework import status` has been imported in views.py.
# CREATE - takes care of the HTTP POST function
# creates new objects in the system
def create(self, request):
"""Create a new hello message."""
serializer = self.serializer_class(data=request.data)
if serializer.is_valid():
name = serializer.data.get('name')
message = f'Hello {name}!'
return Response({'message': message})
else:
return Response(
serializer.errors, status=status.HTTP_400_BAD_REQUEST)
# RETRIEVE - gets a specific object by its ID
def retrieve(self, request, pk=None):
"""Handles getting an object by its ID."""
return Response({'http_method': 'GET'})
# UPDATE - corresponds to the HTTP PUT
def update(self, request, pk=None):
"""Handles updating an object."""
return Response({'http_method': 'PUT'})
# PARTIAL_UPDATE - corresponds to HTTP PATCH method
def partial_update(self, request, pk=None):
"""Handles updating part of an object."""
return Response({'http_method': 'PATCH'})
# DESTROY - corresponds to HTTP DELETE method
def destroy(self, request, pk=None):
"""Handles removing an object."""
return Response({'http_method': 'DELETE'})
###Output
_____no_output_____ |
01_Understanding and Visualizing Data with Python/Week_1 introduction to the field of statistics/01_introduction_jupyter.ipynb | ###Markdown
What are Jupyter Notebooks?Jupyter is a web-based interactive development environment that supports multiple programming languages, though it is most commonly used with the Python programming language.The interactive environment that Jupyter provides enables students, scientists, and researchers to create reproducible analyses and formulate a story within a single document.Let's take a look at an example of a completed Jupyter Notebook: [Example Notebook](http://nbviewer.jupyter.org/github/cossatot/lanf_earthquake_likelihood/blob/master/notebooks/lanf_manuscript_notebook.ipynb) Jupyter Notebook Features* File Browser* Markdown Cells & Syntax* Kernels, Variables, & Environment* Command vs. Edit Mode & Shortcuts What is Markdown?Markdown is a markup language that uses plain text formatting syntax. This means that we can modify the formatting of our text with the use of various symbols on our keyboard as indicators.Some examples include:* Headers* Text modifications such as italics and bold* Ordered and Unordered lists* Links* Tables* Images* Etc.Now I'll showcase some examples of how this formatting is done: Headers: H1 H2 H3 H4 H5 H6 Text modifications:Emphasis, aka italics, with *asterisks* or _underscores_.Strong emphasis, aka bold, with **asterisks** or __underscores__.Combined emphasis with **asterisks and _underscores_**.Strikethrough uses two tildes. ~~Scratch this.~~ Lists:1. First ordered list item2. Another item * Unordered sub-list. 1. Actual numbers don't matter, just that it's a number 1. Ordered sub-list4. And another item.* Unordered list can use asterisks- Or minuses+ Or pluses Links:http://www.umich.edu[The University of Michigan's Homepage](http://umich.edu/)To look into more examples of Markdown syntax and features such as tables, images, etc., head to the following link: [Markdown Reference](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) Kernels, Variables, and EnvironmentA notebook kernel is a “computational engine” that executes the code contained in a Notebook document. There are kernels for various programming languages, though we are solely using the Python kernel, which executes Python code.When a notebook is opened, the associated kernel is automatically launched for our convenience.
###Code
### This is python
print("This is a python code cell")
###Output
_____no_output_____
###Markdown
A kernel is the back-end of our notebook which not only executes our python code, but stores our initialized variables.
###Code
### For example, let's initialize variable x
x = 1738
print("x has been set to " + str(x))
### Print x
print(x)
###Output
_____no_output_____
###Markdown
Issues arise when we restart our kernel and attempt to run code with variables that have not been reinitialized. If the kernel is reset, make sure to rerun the code where variables are initialized.
###Code
## We can also run code that accepts input
name = input("What is your name? ")
print("The name you entered is " + name)
###Output
_____no_output_____
###Markdown
It is important to note that Jupyter Notebooks have in-line cell execution. This means that a cell must complete its operations before another cell can be executed. A cell that is still executing is indicated by the [*] on the left-hand side of the cell.
###Code
print("This won't print until all prior cells have finished executing.")
###Output
_____no_output_____ |
Day 4/Hypothesis_Testing_tutorial1.ipynb | ###Markdown
Hypothesis TestingHypothesis testing is a critical tool in inferential statistics, for determining what the value of a population parameter could be.Statistical hypothesis testing reflects **the scientific method, adapted to the setting of research involving data analysis**. In this framework, a researcher makes a precise statement about the population of interest, then aims to falsify the statement. In statistical hypothesis testing, the statement in question is the **null hypothesis**. If we reject the null hypothesis, we have falsified it (to some degree of confidence). According to the scientific method, falsifying a hypothesis should require an overwhelming amount of evidence against it. If the data we observe are ambiguous, or are only weakly contradictory to the null hypothesis, we do not reject the null hypothesis.The basis of hypothesis testing has two attributes:**Null Hypothesis: $H_0$****Alternative Hypothesis: $H_a$**Various cases which are generally used in hypothesis testing are:* One Population Proportion* Difference in Population Proportions* One Population Mean* Difference in Population MeansThe equation to compute the ***test statistic*** is:$$test\ statistic = \frac{Best\ Estimate - Hypothesized\ Estimate}{Standard\ Error\ of\ Estimate}$$After computing this _test statistic_, we ask ourselves, "How likely is it to see this value of the test statistic under the Null hypothesis?" i.e. we compute a probability value.Depending on that probability, we either **reject or fail to reject the null hypothesis**. Note, we **do not accept the alternative hypothesis** because we can never observe all the data in the universe. Type-I and Type-II errorsThe framework of formal hypothesis testing defines two distinct types of errors. A **type I error (false positive)** occurs when the null hypothesis is true but is incorrectly rejected. A **type II error** occurs when the null hypothesis is not rejected when it actually is false. Most traditional methods for statistical inference aim to strictly control the probability of a type I error, usually at 5%. While we also wish to minimize the probability of a type II error, this is a secondary priority to controlling the type I error.Now let us see some widely used hypothesis testing types:- **t-test (Student's t-test)**- **Z-test****t-test**: The t-test (also called Student's t-test) compares two averages (means) and tells you if they are different from each other. The t-test also tells you how significant the differences are.There are two types of t-test:- **One-sample t-test**- **Two-sample t-test****One-sample t-test**: The one-sample t-test determines whether the sample mean is statistically different from a known or hypothesized population mean. Let's create some dummy age data for the population of voters in the entire country (Senegal) and a sample of voters in Dakar, and test whether the average age of voters in Dakar differs from the population:
###Code
%matplotlib inline
import statsmodels.api as sm
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
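# A quick worked example of the generic test-statistic formula above,
# using illustrative numbers rather than the tutorial data:
best_estimate = 52.0   # e.g. a sample mean
hypothesized = 50.0    # value under the null hypothesis
std_error = 1.0        # standard error of the estimate
test_statistic = (best_estimate - hypothesized) / std_error
# two-sided p-value: z = 2.0 gives p ~ 0.0455
print(test_statistic, 2 * (1 - stats.norm.cdf(abs(test_statistic))))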
np.random.seed(6)
population_ages1 = stats.poisson.rvs(loc=18, mu=35, size=150000)
population_ages2 = stats.poisson.rvs(loc=18, mu=10, size=100000)
population_ages = np.concatenate((population_ages1, population_ages2))
Dakar_ages1 = stats.poisson.rvs(loc=18, mu=30, size=30)
Dakar_ages2 = stats.poisson.rvs(loc=18, mu=10, size=20)
Dakar_ages = np.concatenate((Dakar_ages1, Dakar_ages2))
print( population_ages.mean() )
print( Dakar_ages.mean() )
population_ages
###Output
_____no_output_____
###Markdown
Let's conduct a t-test at a 95% confidence level and see if it correctly rejects the null hypothesis that the sample comes from the same distribution as the population. To conduct a one-sample t-test, we can use the _**stats.ttest_1samp()**_ function:
###Code
stats.ttest_1samp(a= Dakar_ages, # Sample data
popmean= population_ages.mean()) # Pop mean
###Output
_____no_output_____
###Markdown
As the p-value = 0.013 < 0.05, we can reject the null hypothesis. **Two-Sample T-Test**A two-sample t-test investigates whether the means of two independent data samples differ from one another. In a two-sample test, the null hypothesis is that the means of both groups are the same. Unlike the one-sample test, where we test against a known population parameter, the two-sample test only involves sample means. You can conduct a two-sample t-test using the _**stats.ttest_ind()**_ function. Difference in Population MeansLet's generate a sample of voter age data for Kaolack and test it against the sample we made earlier:
###Code
np.random.seed(12)
Kaolack_ages1 = stats.poisson.rvs(loc=18, mu=33, size=30)
Kaolack_ages2 = stats.poisson.rvs(loc=18, mu=13, size=20)
Kaolack_ages = np.concatenate((Kaolack_ages1, Kaolack_ages2))
print( Kaolack_ages.mean() )
stats.ttest_ind(a= Dakar_ages,
b= Kaolack_ages,
equal_var=False)
###Output
_____no_output_____
###Markdown
If we were using a 95% confidence level, we would fail to reject the null hypothesis, since the p-value is greater than the corresponding significance level of 5%. Difference in Population Proportions Research QuestionIs there a significant difference between the population proportions of parents of black children and parents of Hispanic children who report that their child has had some swimming lessons? **Parameter of Interest**: p1 - p2, where p1 = black and p2 = hispanic **Null Hypothesis:** p1 - p2 = 0 **Alternative Hypothesis:** p1 - p2 $\neq$ 0 **Data**: 247 Parents of Black Children. 36.8% of parents report that their child has had some swimming lessons. 308 Parents of Hispanic Children. 38.9% of parents report that their child has had some swimming lessons. Use of `ttest_ind()` from `statsmodels`Difference in population proportions needs a t-test. Also, the populations follow a binomial distribution here. We can just pass the two population quantities with the appropriate binomial distribution parameters to the t-test function.The function returns three values: (a) test statistic, (b) p-value of the t-test, and (c) degrees of freedom used in the t-test.
###Code
n1 = 247
p1 = .37
n2 = 308
p2 = .39
population1 = np.random.binomial(1, p1, n1)
population2 = np.random.binomial(1, p2, n2)
sm.stats.ttest_ind(population1, population2)
###Output
_____no_output_____
###Markdown
Conclusion of the hypothesis testSince the p-value = 0.68 > 0.05, we cannot reject the null hypothesis in this case, i.e. the difference in the population proportions is not statistically significant. But what happens if we could survey a much higher number of people?We do not change the proportions, just the number of survey participants in the two populations. The slight difference in the proportions could become statistically significant in this situation. There is no guarantee that when you run the code you will get a p-value < 0.05 every time, as the samples are randomly generated each time. But if you run it a few times, you will notice some p-values < 0.05 for sure.
###Code
n1 = 5000
p1 = .37
n2 = 5000
p2 = .39
population1 = np.random.binomial(1, p1, n1)
population2 = np.random.binomial(1, p2, n2)
sm.stats.ttest_ind(population1, population2)
###Output
_____no_output_____
###Markdown
Since the p-value is less than 0.05, we reject the null hypothesis. Z-testA z-test is a statistical test used to determine whether two population means are different when the variances are known and the sample size is large. The test statistic is assumed to have a normal distribution, and nuisance parameters such as the standard deviation should be known in order for an accurate z-test to be performed. One-sample z-testThe one-sample z-test is used to test whether the mean of the population is greater than, less than or not equal to a specific value. One Population Mean Research Question Let's say a cartwheeling competition was organized for some adults. The data look like the following,(80.57, 98.96, 85.28, 83.83, 69.94, 89.59, 91.09, 66.25, 91.21, 82.7 , 73.54, 81.99, 54.01, 82.89, 75.88, 98.32, 107.2 , 85.53, 79.08, 84.3 , 89.32, 86.35, 78.98, 92.26, 87.01)Is the average cartwheel distance (in inches) for adults more than 80 inches?**Population**: All adults **Parameter of Interest**: $\mu$, population mean cartwheel distance.**Null Hypothesis:** $\mu$ = 80 **Alternative Hypothesis**: $\mu$ > 80**Data**:25 adult participants. $\mu = 83.84$$\sigma = 10.72$
###Code
cwdata = np.array([80.57, 98.96, 85.28, 83.83, 69.94, 89.59, 91.09, 66.25, 91.21, 82.7 , 73.54, 81.99, 54.01,
82.89, 75.88, 98.32, 107.2 , 85.53, 79.08, 84.3 , 89.32, 86.35, 78.98, 92.26, 87.01])
n = len(cwdata)
mean = cwdata.mean()
sd = cwdata.std()
(n, mean, sd)
sm.stats.ztest(cwdata, value = 80, alternative = "larger")
###Output
_____no_output_____
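###Markdown
As a sanity check, we can compute the same statistic by hand from the test-statistic formula; a sketch (note that `ztest` estimates the standard error from the sample standard deviation, with ddof=1):
###Code
se = cwdata.std(ddof=1) / np.sqrt(n)
z = (mean - 80) / se
print(z, 1 - stats.norm.cdf(z))  # should closely match the ztest output above
###Output
_____no_output_____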
###Markdown
Conclusion of the hypothesis testSince the p-value (0.0394) is lower than the standard significance level of 0.05, we can reject the null hypothesis that the mean cartwheel distance for adults (a population quantity) is equal to 80 inches. Difference in Population Means Research Question Considering adults in the [NHANES data](https://www.cdc.gov/nchs/nhanes/index.htm), do males have a significantly higher mean [Body Mass Index](https://www.cdc.gov/healthyweight/assessing/bmi/index.html) than females?**Population**: Adults in the NHANES data. **Parameter of Interest**: $\mu_1 - \mu_2$, Body Mass Index. **Null Hypothesis:** $\mu_1 = \mu_2$ **Alternative Hypothesis:** $\mu_1 \neq \mu_2$**Data:**2976 Females $\mu_1 = 29.94$ $\sigma_1 = 7.75$ 2759 Male Adults $\mu_2 = 28.78$ $\sigma_2 = 6.25$ $\mu_1 - \mu_2 = 1.16$
###Code
url = "https://raw.githubusercontent.com/kshedden/statswpy/master/NHANES/merged/nhanes_2015_2016.csv"
da = pd.read_csv(url)
da.head()
females = da[da["RIAGENDR"] == 2]
male = da[da["RIAGENDR"] == 1]
n1 = len(females)
mu1 = females["BMXBMI"].mean()
sd1 = females["BMXBMI"].std()
(n1, mu1, sd1)
n2 = len(male)
mu2 = male["BMXBMI"].mean()
sd2 = male["BMXBMI"].std()
(n2, mu2, sd2)
sm.stats.ztest(females["BMXBMI"].dropna(), male["BMXBMI"].dropna(),alternative='two-sided')
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/structured/solutions/2_automl_tables_babyweight.ipynb | ###Markdown
LAB 2: AutoML Tables Babyweight Training.**Learning Objectives**1. Setup AutoML Tables1. Create and import AutoML Tables dataset from BigQuery1. Analyze AutoML Tables dataset1. Train AutoML Tables model1. Check evaluation metrics1. Deploy model1. Make batch predictions1. Make online predictions Introduction In this notebook, we will use AutoML Tables to train a model to predict the weight of a baby before it is born. We will use the AutoML Tables UI to create a training dataset from BigQuery and will then train, evaluate, and predict with a Auto ML Tables model.In this lab, we will setup AutoML Tables, create and import an AutoML Tables dataset from BigQuery, analyze AutoML Tables dataset, train an AutoML Tables model, check evaluation metrics of trained model, deploy trained model, and then finally make both batch and online predictions using the trained model.Each learning objective will correspond to a series of steps to complete in this student lab notebook. Verify tables existRun the following cells to verify that we previously created the dataset and data tables. If not, go back to lab [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb) to create them.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_augmented_data
LIMIT 0
###Output
_____no_output_____
_____no_output_____ |
Lecture_4.ipynb | ###Markdown
 Data Science in Medicine using Python Author: Dr Gusztav Belteki 1. Review of homework: slicing and dicing in PythonSubsetting and indexing pandas DataFrames
###Code
a = 42; b = 'Hello World'
dir()
import os
import pandas as pd
filenames = ['CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip', ]
data = {}
for file in filenames:
path = os.path.join('data', file,)
data[file] = pd.read_csv(path)
data
data_all = pd.concat(data)
data_all
import os
import pandas as pd
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip',)
data = pd.read_csv(path)
data
data.info()
data.head(5)
# Select the third row only (position 2, since positions start at 0)
selection = data.iloc[2]
selection
# Select the "MVe [L/min]" column only"
selection = data['5001|MVe [L/min]']
selection
data.columns
# Select the "MVe [L/min]" and "MVi [L/min]" columns only
columns_to_keep = ['5001|MVe [L/min]', '5001|MVi [L/min]']
selection = data[columns_to_keep]
selection
# Select the 'MVe [L/min]' value from the third row
selection = data.iloc[2]['5001|MVe [L/min]']
selection
type(selection)
###Output
_____no_output_____
###Markdown
End-to-end analysis of tabular (two-dimensional) data using pandas
###Code
pd.read_csv?
###Output
_____no_output_____
###Markdown
You can also look it up on the internet.
###Code
import os
import pandas as pd
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip',)
data = pd.read_csv(path)
data
len(data)
data.shape
data.ndim
data.info()
round(data.describe(), 2)
data.head()
data.tail()
###Output
_____no_output_____
###Markdown
What is the problem with these data - Indexed by numbers only (uninformative, it should be indexed by date and time)- Date and time are in separate columns- Date and time formats are not appropriate- Column names are too long and difficult to read- Lots of `na` values - half of every row is empty- Some columns have barely any informative values- Some values are not meaningful (e.g. tidal volume should be mL/kg not mL)*We will deal with all these issues*
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 689436 entries, 0 to 689435
Data columns (total 47 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time [ms] 689436 non-null int64
1 Date 689436 non-null object
2 Time 689436 non-null object
3 Rel.Time [s] 689436 non-null int64
4 5001|MVe [L/min] 344633 non-null float64
5 5001|MVi [L/min] 344633 non-null float64
6 5001|Cdyn [L/bar] 344073 non-null float64
7 5001|R [mbar/L/s] 342099 non-null float64
8 5001|MVespon [L/min] 344633 non-null float64
9 5001|Rpat [mbar/L/s] 341398 non-null float64
10 5001|MVemand [L/min] 344633 non-null float64
11 5001|FlowDev [L/min] 344677 non-null float64
12 5001|VTmand [mL] 344567 non-null float64
13 5001|r2 [no unit] 344538 non-null float64
14 5001|VTispon [mL] 344634 non-null float64
15 5001|Pmin [mbar] 344677 non-null float64
16 5001|Pmean [mbar] 344677 non-null float64
17 5001|PEEP [mbar] 344677 non-null float64
18 5001|RRmand [1/min] 344677 non-null float64
19 5001|PIP [mbar] 344677 non-null float64
20 5001|VTmand [L] 344567 non-null float64
21 5001|VTspon [L] 344634 non-null float64
22 5001|VTemand [mL] 344567 non-null float64
23 5001|VTespon [mL] 344634 non-null float64
24 5001|VTimand [mL] 344571 non-null float64
25 5001|VT [mL] 344567 non-null float64
26 5001|% leak [%] 344633 non-null float64
27 5001|RRspon [1/min] 256267 non-null float64
28 5001|% MVspon [%] 344633 non-null float64
29 5001|MV [L/min] 344633 non-null float64
30 5001|RRtrig [1/min] 344633 non-null float64
31 5001|RR [1/min] 344633 non-null float64
32 5001|I (I:E) [no unit] 344672 non-null float64
33 5001|E (I:E) [no unit] 344672 non-null float64
34 5001|FiO2 [%] 344677 non-null float64
35 5001|VTspon [mL] 344633 non-null float64
36 5001|E [mbar/L] 340217 non-null float64
37 5001|TC [s] 344051 non-null float64
38 5001|TCe [s] 344566 non-null float64
39 5001|C20/Cdyn [no unit] 339993 non-null float64
40 5001|VTe [mL] 344566 non-null float64
41 5001|VTi [mL] 344570 non-null float64
42 5001|EIP [mbar] 344676 non-null float64
43 5001|MVleak [L/min] 344632 non-null float64
44 5001|Tispon [s] 10015 non-null float64
45 5001|I:Espon (I-Part) [no unit] 9071 non-null float64
46 5001|I:Espon (E-Part) [no unit] 9071 non-null float64
dtypes: float64(43), int64(2), object(2)
memory usage: 247.2+ MB
###Markdown
1. Convert the `date` and `time` columns to appropriate format Let us time how long the import takes
###Code
# Let us time how long the import takes
%%time
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
data = pd.read_csv(path)
data
pd.read_csv?
###Output
_____no_output_____
###Markdown
This takes much longer
###Code
%%time
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
lst = ['Date', 'Time']
data = pd.read_csv(path, parse_dates = lst)
data
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 689436 entries, 0 to 689435
Data columns (total 47 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time [ms] 689436 non-null int64
1 Date 689436 non-null datetime64[ns]
2 Time 689436 non-null datetime64[ns]
3 Rel.Time [s] 689436 non-null int64
4 5001|MVe [L/min] 344633 non-null float64
5 5001|MVi [L/min] 344633 non-null float64
6 5001|Cdyn [L/bar] 344073 non-null float64
7 5001|R [mbar/L/s] 342099 non-null float64
8 5001|MVespon [L/min] 344633 non-null float64
9 5001|Rpat [mbar/L/s] 341398 non-null float64
10 5001|MVemand [L/min] 344633 non-null float64
11 5001|FlowDev [L/min] 344677 non-null float64
12 5001|VTmand [mL] 344567 non-null float64
13 5001|r2 [no unit] 344538 non-null float64
14 5001|VTispon [mL] 344634 non-null float64
15 5001|Pmin [mbar] 344677 non-null float64
16 5001|Pmean [mbar] 344677 non-null float64
17 5001|PEEP [mbar] 344677 non-null float64
18 5001|RRmand [1/min] 344677 non-null float64
19 5001|PIP [mbar] 344677 non-null float64
20 5001|VTmand [L] 344567 non-null float64
21 5001|VTspon [L] 344634 non-null float64
22 5001|VTemand [mL] 344567 non-null float64
23 5001|VTespon [mL] 344634 non-null float64
24 5001|VTimand [mL] 344571 non-null float64
25 5001|VT [mL] 344567 non-null float64
26 5001|% leak [%] 344633 non-null float64
27 5001|RRspon [1/min] 256267 non-null float64
28 5001|% MVspon [%] 344633 non-null float64
29 5001|MV [L/min] 344633 non-null float64
30 5001|RRtrig [1/min] 344633 non-null float64
31 5001|RR [1/min] 344633 non-null float64
32 5001|I (I:E) [no unit] 344672 non-null float64
33 5001|E (I:E) [no unit] 344672 non-null float64
34 5001|FiO2 [%] 344677 non-null float64
35 5001|VTspon [mL] 344633 non-null float64
36 5001|E [mbar/L] 340217 non-null float64
37 5001|TC [s] 344051 non-null float64
38 5001|TCe [s] 344566 non-null float64
39 5001|C20/Cdyn [no unit] 339993 non-null float64
40 5001|VTe [mL] 344566 non-null float64
41 5001|VTi [mL] 344570 non-null float64
42 5001|EIP [mbar] 344676 non-null float64
43 5001|MVleak [L/min] 344632 non-null float64
44 5001|Tispon [s] 10015 non-null float64
45 5001|I:Espon (I-Part) [no unit] 9071 non-null float64
46 5001|I:Espon (E-Part) [no unit] 9071 non-null float64
dtypes: datetime64[ns](2), float64(43), int64(2)
memory usage: 247.2 MB
###Markdown
There must be a better way!!! Google: **"How to combine date and time columns in pandas"**
###Code
%%time
path = os.path.join('data', 'CsvLogBase_2020-11-02_134238.904_slow_Measurement.csv.zip')
data = pd.read_csv(path, parse_dates = [['Date', 'Time']])
data
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 689436 entries, 0 to 689435
Data columns (total 46 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date_Time 689436 non-null datetime64[ns]
1 Time [ms] 689436 non-null int64
2 Rel.Time [s] 689436 non-null int64
3 5001|MVe [L/min] 344633 non-null float64
4 5001|MVi [L/min] 344633 non-null float64
5 5001|Cdyn [L/bar] 344073 non-null float64
6 5001|R [mbar/L/s] 342099 non-null float64
7 5001|MVespon [L/min] 344633 non-null float64
8 5001|Rpat [mbar/L/s] 341398 non-null float64
9 5001|MVemand [L/min] 344633 non-null float64
10 5001|FlowDev [L/min] 344677 non-null float64
11 5001|VTmand [mL] 344567 non-null float64
12 5001|r2 [no unit] 344538 non-null float64
13 5001|VTispon [mL] 344634 non-null float64
14 5001|Pmin [mbar] 344677 non-null float64
15 5001|Pmean [mbar] 344677 non-null float64
16 5001|PEEP [mbar] 344677 non-null float64
17 5001|RRmand [1/min] 344677 non-null float64
18 5001|PIP [mbar] 344677 non-null float64
19 5001|VTmand [L] 344567 non-null float64
20 5001|VTspon [L] 344634 non-null float64
21 5001|VTemand [mL] 344567 non-null float64
22 5001|VTespon [mL] 344634 non-null float64
23 5001|VTimand [mL] 344571 non-null float64
24 5001|VT [mL] 344567 non-null float64
25 5001|% leak [%] 344633 non-null float64
26 5001|RRspon [1/min] 256267 non-null float64
27 5001|% MVspon [%] 344633 non-null float64
28 5001|MV [L/min] 344633 non-null float64
29 5001|RRtrig [1/min] 344633 non-null float64
30 5001|RR [1/min] 344633 non-null float64
31 5001|I (I:E) [no unit] 344672 non-null float64
32 5001|E (I:E) [no unit] 344672 non-null float64
33 5001|FiO2 [%] 344677 non-null float64
34 5001|VTspon [mL] 344633 non-null float64
35 5001|E [mbar/L] 340217 non-null float64
36 5001|TC [s] 344051 non-null float64
37 5001|TCe [s] 344566 non-null float64
38 5001|C20/Cdyn [no unit] 339993 non-null float64
39 5001|VTe [mL] 344566 non-null float64
40 5001|VTi [mL] 344570 non-null float64
41 5001|EIP [mbar] 344676 non-null float64
42 5001|MVleak [L/min] 344632 non-null float64
43 5001|Tispon [s] 10015 non-null float64
44 5001|I:Espon (I-Part) [no unit] 9071 non-null float64
45 5001|I:Espon (E-Part) [no unit] 9071 non-null float64
dtypes: datetime64[ns](1), float64(43), int64(2)
memory usage: 242.0 MB
###Markdown
2. Set the `Date_Time` column as index
###Code
data.head()
data.index
list(data.index)[:15]
data = data.set_index('Date_Time') # Do not use inplace modifications
data
data.index
###Output
_____no_output_____
###Markdown
3. Change the clumsy column names
###Code
data_columns = ['Time [ms]', 'Rel.Time [s]', 'MVe [L/min]', 'MVi [L/min]', 'Cdyn [L/bar]', 'R [mbar/L/s]', 'MVespon [L/min]', 'Rpat [mbar/L/s]', 'MVemand [L/min]', 'FlowDev [L/min]', 'VTmand [mL]', 'r2 [no unit]', 'VTispon [mL]', 'Pmin [mbar]', 'Pmean [mbar]', 'PEEP [mbar]', 'RRmand [1/min]', 'PIP [mbar]', 'VTmand [L]', 'VTspon [L]', 'VTemand [mL]', 'VTespon [mL]', 'VTimand [mL]', 'VT [mL]', '% leak [%]', 'RRspon [1/min]', '% MVspon [%]', 'MV [L/min]', 'RRtrig [1/min]', 'RR [1/min]', 'I (I:E) [no unit]', 'E (I:E) [no unit]', 'FiO2 [%]', 'VTspon [mL]', 'E [mbar/L]', 'TC [s]', 'TCe [s]', 'C20/Cdyn [no unit]', 'VTe [mL]', 'VTi [mL]', 'EIP [mbar]', 'MVleak [L/min]', 'Tispon [s]', 'I:Espon (I-Part) [no unit]', 'I:Espon (E-Part) [no unit]']
data.columns
###Output
_____no_output_____
###Markdown
We could just replace it with `data.columns = ['...', '...', '...']` but that is error prone
###Code
# Welcome to list comprehensions
new_columns_1 = [item for item in data.columns]
print(new_columns_1)
new_columns_2 = [item[5:] for item in data.columns]
print(new_columns_2)
new_columns_3 = [item[5:] for item in data.columns if item.startswith('5001')]
print(new_columns_3)
# The expression to the right of `=` is evaluated first (before assignment)
new_columns_3 = ['Time [ms]', 'Rel.Time [s]'] + new_columns_3
print(new_columns_3)
data.columns = new_columns_3
data.columns
data.head()
###Output
_____no_output_____
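###Markdown
As an aside, the same prefix stripping can be written with pandas' vectorized string methods instead of a list comprehension; a minimal sketch on a small throwaway index (the real columns above have already been renamed):
###Code
# Equivalent idea using the .str accessor; regex=False keeps '|' literal
example_cols = pd.Index(['Time [ms]', 'Rel.Time [s]', '5001|MVe [L/min]'])
print(example_cols.str.replace('5001|', '', regex=False))
###Output
_____no_output_____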
###Markdown
4. Combine consecutive columns as they contain complementary data
###Code
data.info()
data.head(10)
# This is called a `hack`: consecutive rows share the same second but hold
# complementary columns, so resampling to 1-second bins and averaging merges them.
# During mean(), NA values are excluded by default.
data = data.resample('1S').mean()
data.head(10)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 344677 entries, 2020-11-02 13:42:39 to 2020-11-06 13:27:15
Freq: S
Data columns (total 45 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time [ms] 344677 non-null float64
1 Rel.Time [s] 344677 non-null float64
2 MVe [L/min] 344519 non-null float64
3 MVi [L/min] 344519 non-null float64
4 Cdyn [L/bar] 343959 non-null float64
5 R [mbar/L/s] 341986 non-null float64
6 MVespon [L/min] 344519 non-null float64
7 Rpat [mbar/L/s] 341285 non-null float64
8 MVemand [L/min] 344519 non-null float64
9 FlowDev [L/min] 344563 non-null float64
10 VTmand [mL] 344453 non-null float64
11 r2 [no unit] 344424 non-null float64
12 VTispon [mL] 344520 non-null float64
13 Pmin [mbar] 344563 non-null float64
14 Pmean [mbar] 344563 non-null float64
15 PEEP [mbar] 344563 non-null float64
16 RRmand [1/min] 344563 non-null float64
17 PIP [mbar] 344563 non-null float64
18 VTmand [L] 344453 non-null float64
19 VTspon [L] 344520 non-null float64
20 VTemand [mL] 344453 non-null float64
21 VTespon [mL] 344520 non-null float64
22 VTimand [mL] 344457 non-null float64
23 VT [mL] 344453 non-null float64
24 % leak [%] 344519 non-null float64
25 RRspon [1/min] 256176 non-null float64
26 % MVspon [%] 344519 non-null float64
27 MV [L/min] 344519 non-null float64
28 RRtrig [1/min] 344519 non-null float64
29 RR [1/min] 344519 non-null float64
30 I (I:E) [no unit] 344558 non-null float64
31 E (I:E) [no unit] 344558 non-null float64
32 FiO2 [%] 344563 non-null float64
33 VTspon [mL] 343318 non-null float64
34 E [mbar/L] 338973 non-null float64
35 TC [s] 342758 non-null float64
36 TCe [s] 343260 non-null float64
37 C20/Cdyn [no unit] 338726 non-null float64
38 VTe [mL] 343260 non-null float64
39 VTi [mL] 343265 non-null float64
40 EIP [mbar] 343361 non-null float64
41 MVleak [L/min] 343318 non-null float64
42 Tispon [s] 10011 non-null float64
43 I:Espon (I-Part) [no unit] 9056 non-null float64
44 I:Espon (E-Part) [no unit] 9056 non-null float64
dtypes: float64(45)
memory usage: 121.0 MB
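###Markdown
An equivalent way to express the same merge, making the grouping key explicit, is to group rows by their timestamp floored to whole seconds; a sketch (on the already-resampled data above this is a no-op):
###Code
# Same idea as resample('1S').mean(), written as an explicit groupby
merged = data.groupby(data.index.floor('1S')).mean()
merged.head()
###Output
_____no_output_____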
###Markdown
Please recognise that these are already aggregate data!!! The numbers in the same column do not necessarily belong to the same observation (e.g. ventilator inflations) 5. Remove na values
###Code
data.info()
data.shape
# Some columns are almost completely empty and hopeless - drop them
columns_to_drop = ['Tispon [s]', 'I:Espon (I-Part) [no unit]', 'I:Espon (E-Part) [no unit]']
data = data.drop(columns_to_drop, axis = 1)
data.info()
data.isnull().sum()
# How many data points are missing ?
data.isnull().sum()
# How many percent of data is missing
data.isnull().sum() / len(data) * 100
###Output
_____no_output_____
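###Markdown
The percentage calculation above works element-wise: pandas broadcasts the scalar division and multiplication across every value of the Series. A minimal sketch of the same behaviour:
###Code
# Broadcasting demo: scalar operations apply to each element
s = pd.Series([1, 2, 3])
print(s / 2 * 100)
###Output
_____no_output_____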
###Markdown
A lot of things are happening in the percentage calculation above, for example vectorized computation and broadcasting. We will speak about them during the next session. 6. Now let us save the processed data A. Export them as `csv` files
###Code
%%time
path = os.path.join('results', 'data_processed.csv')
data.to_csv(path)
###Output
CPU times: user 10.5 s, sys: 228 ms, total: 10.7 s
Wall time: 10.8 s
###Markdown
B. Export as an Excel file This will run for a long time
###Code
%%time
path = os.path.join('data', 'data_processed.xlsx')
data.to_excel(path, sheet_name = 'processed_data')
###Output
_____no_output_____
###Markdown
C. export them as serialised binary data - `pickle`
###Code
%%time
import pickle
path = os.path.join('results', 'data_processed.pickle')
filehandle = open(path, 'wb')
pickle.dump(data, filehandle)
filehandle.close()
%%time
import pickle
path = os.path.join('results', 'data_processed.pickle')
filehandle = open(os.path.join(path), 'rb')
data_processed = pickle.load(filehandle)
filehandle.close()
data_processed
###Output
CPU times: user 29.6 ms, sys: 62.9 ms, total: 92.5 ms
Wall time: 93.3 ms
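###Markdown
pandas also ships convenience wrappers for the same pickle round-trip; a minimal sketch:
###Code
# Equivalent round-trip using pandas' built-in pickle helpers
path = os.path.join('results', 'data_processed.pickle')
data.to_pickle(path)
data_processed = pd.read_pickle(path)
###Output
_____no_output_____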
###Markdown
To be continued... 7. Homework 1. Further exploratory analysis of the data
###Code
# Select all data during the 1 minute period at 2020-11-03 13:00
selection = data
selection
# Select all data between 2020-11-03 13:00 and and 15:00
selection = data
selection
# Select all data between 2020-11-03 13:00 and and 15:00 and limit it to
# "MVe [L/min]" and "MVi [L/min]" columns only
selection = data
selection
###Output
_____no_output_____
###Markdown
Lecture 4 use SQL Review Python
###Code
demo_str = 'this is my string'
for word_item in demo_str.split():
print(word_item)
print(' {} + {} is {} '.format(1,2,1+2))
###Output
1 + 2 is 3
###Markdown
Install or import libs
###Code
!pip install psycopg2
import pandas
import configparser
import psycopg2
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
###Markdown
Establish connection
###Code
config = configparser.ConfigParser()
config.read('config.ini')
host = config['myaws']['host']
db=config['myaws']['db']
user=config['myaws']['user']
pwd=config['myaws']['pwd']
conn = psycopg2.connect(
host = host,
user = user,
password = pwd,
dbname=db
)
###Output
_____no_output_____
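###Markdown
With the connection open, a typical next step is to pull query results straight into pandas. A minimal sketch; the table name `demo_table` is an assumption, so substitute one of your own tables:
###Code
# Run a simple query through the open connection
# (pandas may warn that it prefers a SQLAlchemy engine over a raw
# DBAPI connection, but the query still works)
df = pandas.read_sql_query('SELECT * FROM demo_table LIMIT 5', conn)
print(df)
###Output
_____no_output_____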
###Markdown
Regression Metrics
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
data = load_boston()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target)
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
expected = y_test
import sklearn.metrics as sm
print(clf.coef_)
# RMSE as the same units as the quantity estimated, can normalize if needed
print("RMSE: %s" % np.sqrt(sm.mean_squared_error(expected, predicted)))
# proportion of variance which is explained by the model
print("R^2: %s" % sm.r2_score(expected, predicted))
# MedAE median absolute error is robust to outliers
print("MedAE: %s" % sm.median_absolute_error(expected, predicted))
# using normalized data (general model fit is the same, but coefficients are
# more easily compared)
print("Using normalized data")
from sklearn.preprocessing import normalize
#normalize using default (l2) normalization
data.data = normalize(data.data)
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target)
clf = LinearRegression()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
expected = y_test
import sklearn.metrics as sm
print(clf.coef_)
# RMSE as the same units as the quantity estimated, can normalize if needed
print("RMSE: %s" % np.sqrt(sm.mean_squared_error(expected, predicted)))
# proportion of variance which is explained by the model
print("R^2: %s" % sm.r2_score(expected, predicted))
# MedAE median absolute error is robust to outliers
print("MedAE: %s" % sm.median_absolute_error(expected, predicted))
###Output
_____no_output_____
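###Markdown
A side note: scikit-learn 0.22+ can return the RMSE directly from `mean_squared_error`; a minimal sketch (version-dependent, so the manual `np.sqrt` above stays the portable option):
###Code
# Requires scikit-learn >= 0.22; recent releases deprecate this keyword
# in favour of a dedicated root_mean_squared_error function
print("RMSE: %s" % sm.mean_squared_error(expected, predicted, squared=False))
###Output
_____no_output_____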
###Markdown
Classification metrics
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer
from sklearn import metrics, neighbors, svm, calibration, tree
cancer = load_breast_cancer()
#print(cancer.DESCR)
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0, test_size=0.25)
def test_model(model):
model.fit(X_train, y_train)
y_predict = model.predict(X_test)
print("%s: %f" % (str(model), metrics.accuracy_score(y_test, y_predict)))
print(metrics.classification_report(y_test, y_predict))
y_predict_proba = model.predict_proba(X_test)
fpr, tpr, _ = metrics.roc_curve(y_test, y_predict_proba[:,1])
roc_auc = metrics.auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
test_model(LogisticRegression())
test_model(neighbors.KNeighborsClassifier(n_neighbors=1))
test_model(neighbors.KNeighborsClassifier(n_neighbors=16))
test_model(svm.SVC(kernel='linear',probability=True))
test_model(calibration.CalibratedClassifierCV(svm.LinearSVC()))
test_model(tree.DecisionTreeClassifier())
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
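###Markdown
Alongside accuracy and the ROC curve, the confusion matrix shows where a classifier's errors fall; a minimal sketch with a freshly fitted model:
###Code
# Confusion matrix: rows are true classes, columns are predicted classes
clf = LogisticRegression()
clf.fit(X_train, y_train)
print(metrics.confusion_matrix(y_test, clf.predict(X_test)))
###Output
_____no_output_____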
###Markdown
Tuning for Bias / Variance
###Code
# find the best KNeighborsClassifier
from sklearn.model_selection import validation_curve
nneighbors = np.arange(1,51)
train_scores, validation_scores = validation_curve(
neighbors.KNeighborsClassifier(),
cancer.data, cancer.target,
param_name='n_neighbors',
param_range=nneighbors, cv=5)
# Plot the mean train error and validation error across folds
plt.figure(figsize=(6, 4))
plt.plot(nneighbors, validation_scores.mean(axis=1), lw=2,
label='cross-validation')
plt.plot(nneighbors, train_scores.mean(axis=1), lw=2, label='training')
plt.legend(loc='best')
plt.xlabel('number of neighbors')
plt.ylabel('accuracy score')  # the default scorer for a classifier is accuracy
plt.title('Validation curve')
plt.tight_layout()
test_model(neighbors.KNeighborsClassifier(n_neighbors=np.argmax(np.mean(validation_scores, axis=1))+1))
###Output
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=13, p=2,
weights='uniform'): 0.958042
precision recall f1-score support
0 0.96 0.92 0.94 53
1 0.96 0.98 0.97 90
accuracy 0.96 143
macro avg 0.96 0.95 0.95 143
weighted avg 0.96 0.96 0.96 143
###Markdown
Electronic Medical Records
###Code
#download 100patients.zip from google drive
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# 2. Download actual file
file_patients = drive.CreateFile({'id': '1BKA073MRD3WLuHutN-sZug9vtPVgW0Lg'})
file_patients.GetContentFile('100-patients.zip')
!ls
!unzip 100-patients.zip
import pandas as pd
df_patients = pd.read_table('PatientCorePopulatedTable.txt')
df_patients.set_index('PatientID')
df_labs = pd.read_table('LabsCorePopulatedTable.txt')
df_admcore = pd.read_table('AdmissionsCorePopulatedTable.txt')
df_admdiag = pd.read_table('AdmissionsDiagnosesCorePopulatedTable.txt')
tables = [df_patients, df_labs, df_admcore, df_admdiag]
#print(df_patients)
for df in tables:
print(df.info())
print(df.head())
# Patients whose primary diagnosis code starts with C92 (myeloid leukemia)
myeloid = df_admdiag[df_admdiag['PrimaryDiagnosisCode'].str.startswith("C92")]
print(myeloid)
print(df_admdiag[df_admdiag['PrimaryDiagnosisDescription'].str.contains("leukemia")])
print(myeloid.merge(df_patients, on='PatientID'))
###Output
PatientID ... PrimaryDiagnosisDescription
2 80AC01B2-BD55-4BE0-A59A-4024104CF4E9 ... Chronic myeloid leukemia, BCR/ABL-positive
25 0681FA35-A794-4684-97BD-00B88370DB41 ... Acute myelomonocytic leukemia, in remission
58 1A40AF35-C6D4-4D46-B475-A15D84E8A9D5 ... Acute myeloblastic leukemia
102 1A8791E3-A61C-455A-8DEE-763EB90C9B2C ... Acute myelomonocytic leukemia, not having achi...
140 8D389A8C-A6D8-4447-9DDE-1A28AB4EC667 ... Chronic myeloid leukemia, BCR/ABL-positive, in...
[5 rows x 4 columns]
PatientID ... PrimaryDiagnosisDescription
2 80AC01B2-BD55-4BE0-A59A-4024104CF4E9 ... Chronic myeloid leukemia, BCR/ABL-positive
4 6A57AC0C-57F3-4C19-98A1-51135EFBC4FF ... Acute lymphoblastic leukemia not having achiev...
25 0681FA35-A794-4684-97BD-00B88370DB41 ... Acute myelomonocytic leukemia, in remission
58 1A40AF35-C6D4-4D46-B475-A15D84E8A9D5 ... Acute myeloblastic leukemia
97 F0B53A2C-98CA-415D-B928-E3FD0E52B22A ... Acute erythroid leukemia, in relapse
102 1A8791E3-A61C-455A-8DEE-763EB90C9B2C ... Acute myelomonocytic leukemia, not having achi...
126 FFCDECD6-4048-4DCB-B910-1218160005B3 ... Hairy cell leukemia, in relapse
133 69CC25ED-A54A-4BAF-97E3-774BB3C9DED1 ... Adult T-cell lymphoma/leukemia (HTLV-1-associa...
138 69B5D2A0-12FD-46EF-A5FF-B29C4BAFBE49 ... Acute erythroid leukemia
140 8D389A8C-A6D8-4447-9DDE-1A28AB4EC667 ... Chronic myeloid leukemia, BCR/ABL-positive, in...
194 B2EB15FA-5431-4804-9309-4215BDC778C0 ... Chronic lymphocytic leukemia of B-cell type
232 98F593D2-8894-49BB-93B9-5A0E2CF85E2E ... Mast cell leukemia, in relapse
241 7A7332AD-88B1-4848-9356-E5260E477C59 ... Acute lymphoblastic leukemia [ALL]
251 03A481F5-B32A-4A91-BD42-43EB78FEBA77 ... Acute megakaryoblastic leukemia, in relapse
274 6623F5D6-D581-4268-9F9B-21612FBBF7B5 ... Mast cell leukemia, in relapse
311 B39DC5AC-E003-4E6A-91B6-FC07625A1285 ... Acute monoblastic/monocytic leukemia, in remis...
[16 rows x 4 columns]
PatientID ... PatientPopulationPercentageBelowPoverty
0 80AC01B2-BD55-4BE0-A59A-4024104CF4E9 ... 19.74
1 0681FA35-A794-4684-97BD-00B88370DB41 ... 19.16
2 1A40AF35-C6D4-4D46-B475-A15D84E8A9D5 ... 11.25
3 1A8791E3-A61C-455A-8DEE-763EB90C9B2C ... 13.97
4 8D389A8C-A6D8-4447-9DDE-1A28AB4EC667 ... 4.34
[5 rows x 10 columns]
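###Markdown
A natural next step is to pull the lab results for the matched patients; a minimal sketch (assuming, as the merge above suggests, that the labs table also carries a `PatientID` column):
###Code
# Lab results restricted to the myeloid leukemia patients found above
labs_for_myeloid = df_labs[df_labs['PatientID'].isin(myeloid['PatientID'])]
print(labs_for_myeloid.head())
###Output
_____no_output_____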
|
vui/vui_notebook.ipynb | ###Markdown
Artificial Intelligence Nanodegree Voice User Interfaces Project: Speech Recognition with Neural Networks---In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully! > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.--- Introduction In this notebook, you will build a deep neural network that functions as part of an end-to-end automatic speech recognition (ASR) pipeline! Your completed pipeline will accept raw audio as input and return a predicted transcription of the spoken language. The full pipeline is summarized in the figure below.- **STEP 1** is a pre-processing step that converts raw audio to one of two feature representations that are commonly used for ASR. - **STEP 2** is an acoustic model which accepts audio features as input and returns a probability distribution over all potential transcriptions. After learning about the basic types of neural networks that are often used for acoustic modeling, you will engage in your own investigations, to design your own acoustic model!- **STEP 3** in the pipeline takes the output from the acoustic model and returns a predicted transcription. Feel free to use the links below to navigate the notebook:- [The Data](thedata)- [**STEP 1**](step1): Acoustic Features for Speech Recognition- [**STEP 2**](step2): Deep Neural Networks for Acoustic Modeling - [Model 0](model0): RNN - [Model 1](model1): RNN + TimeDistributed Dense - [Model 2](model2): CNN + RNN + TimeDistributed Dense - [Model 3](model3): Deeper RNN + TimeDistributed Dense - [Model 4](model4): Bidirectional RNN + TimeDistributed Dense - [Models 5+](model5) - [Compare the Models](compare) - [Final Model](final)- [**STEP 3**](step3): Obtain Predictions The DataWe begin by investigating the dataset that will be used to train and evaluate your pipeline. 
[LibriSpeech](http://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a large corpus of English-read speech, designed for training and evaluating models for ASR. The dataset contains 1000 hours of speech derived from audiobooks. We will work with a small subset in this project, since larger-scale data would take a long while to train. However, after completing this project, if you are interested in exploring further, you are encouraged to work with more of the data that is provided [online](http://www.openslr.org/12/).In the code cells below, you will use the `vis_train_features` module to visualize a training example. The supplied argument `index=0` tells the module to extract the first example in the training set. (You are welcome to change `index=0` to point to a different training example, if you like, but please **DO NOT** amend any other code in the cell.) The returned variables are:- `vis_text` - transcribed text (label) for the training example.- `vis_raw_audio` - raw audio waveform for the training example.- `vis_mfcc_feature` - mel-frequency cepstral coefficients (MFCCs) for the training example.- `vis_spectrogram_feature` - spectrogram for the training example. - `vis_audio_path` - the file path to the training example.
###Code
from data_generator import vis_train_features
# extract label and audio features for a single training example
vis_text, vis_raw_audio, vis_mfcc_feature, vis_spectrogram_feature, vis_audio_path = vis_train_features()
###Output
There are 2136 total training examples.
###Markdown
The following code cell visualizes the audio waveform for your chosen example, along with the corresponding transcript. You also have the option to play the audio in the notebook!
###Code
from IPython.display import Markdown, display
from data_generator import vis_train_features, plot_raw_audio
from IPython.display import Audio
%matplotlib inline
# plot audio signal
plot_raw_audio(vis_raw_audio)
# print length of audio signal
display(Markdown('**Shape of Audio Signal** : ' + str(vis_raw_audio.shape)))
# print transcript corresponding to audio clip
display(Markdown('**Transcript** : ' + str(vis_text)))
# play the audio file
Audio(vis_audio_path)
###Output
_____no_output_____
###Markdown
STEP 1: Acoustic Features for Speech RecognitionFor this project, you won't use the raw audio waveform as input to your model. Instead, we provide code that first performs a pre-processing step to convert the raw audio to a feature representation that has historically proven successful for ASR models. Your acoustic model will accept the feature representation as input.In this project, you will explore two possible feature representations. _After completing the project_, if you'd like to read more about deep learning architectures that can accept raw audio input, you are encouraged to explore this [research paper](https://pdfs.semanticscholar.org/a566/cd4a8623d661a4931814d9dffc72ecbf63c4.pdf). SpectrogramsThe first option for an audio feature representation is the [spectrogram](https://www.youtube.com/watch?v=_FatxGN3vAM). In order to complete this project, you will **not** need to dig deeply into the details of how a spectrogram is calculated; but, if you are curious, the code for calculating the spectrogram was borrowed from [this repository](https://github.com/baidu-research/ba-dls-deepspeech). The implementation appears in the `utils.py` file in your repository.The code that we give you returns the spectrogram as a 2D tensor, where the first (_vertical_) dimension indexes time, and the second (_horizontal_) dimension indexes frequency. To speed the convergence of your algorithm, we have also normalized the spectrogram. (You can see this quickly in the visualization below by noting that the mean value hovers around zero, and most entries in the tensor assume values close to zero.)
###Code
from data_generator import plot_spectrogram_feature
# plot normalized spectrogram
plot_spectrogram_feature(vis_spectrogram_feature)
# print shape of spectrogram
display(Markdown('**Shape of Spectrogram** : ' + str(vis_spectrogram_feature.shape)))
###Output
_____no_output_____
###Markdown
Mel-Frequency Cepstral Coefficients (MFCCs)The second option for an audio feature representation is [MFCCs](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). You do **not** need to dig deeply into the details of how MFCCs are calculated, but if you would like more information, you are welcome to peruse the [documentation](https://github.com/jameslyons/python_speech_features) of the `python_speech_features` Python package. Just as with the spectrogram features, the MFCCs are normalized in the supplied code.The main idea behind MFCC features is the same as spectrogram features: at each time window, the MFCC feature yields a feature vector that characterizes the sound within the window. Note that the MFCC feature is much lower-dimensional than the spectrogram feature, which could help an acoustic model to avoid overfitting to the training dataset.
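A minimal sketch of how normalized MFCC features might be extracted with the `python_speech_features` package linked above (parameter values are illustrative, not necessarily those used in `data_generator.py`):
```python
import numpy as np
from python_speech_features import mfcc

def normalized_mfcc(audio, sample_rate=16000, numcep=13, eps=1e-14):
    feat = mfcc(audio, samplerate=sample_rate, numcep=numcep)  # shape: (time, numcep)
    return (feat - feat.mean()) / (feat.std() + eps)           # normalize as above
```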
###Code
from data_generator import plot_mfcc_feature
# plot normalized MFCC
plot_mfcc_feature(vis_mfcc_feature)
# print shape of MFCC
display(Markdown('**Shape of MFCC** : ' + str(vis_mfcc_feature.shape)))
###Output
_____no_output_____
###Markdown
When you construct your pipeline, you will be able to choose to use either spectrogram or MFCC features. If you would like to see different implementations that make use of MFCCs and/or spectrograms, please check out the links below:- This [repository](https://github.com/baidu-research/ba-dls-deepspeech) uses spectrograms.- This [repository](https://github.com/mozilla/DeepSpeech) uses MFCCs.- This [repository](https://github.com/buriburisuri/speech-to-text-wavenet) also uses MFCCs.- This [repository](https://github.com/pannous/tensorflow-speech-recognition/blob/master/speech_data.py) experiments with raw audio, spectrograms, and MFCCs as features. STEP 2: Deep Neural Networks for Acoustic ModelingIn this section, you will experiment with various neural network architectures for acoustic modeling. You will begin by training five relatively simple architectures. **Model 0** is provided for you. You will write code to implement **Models 1**, **2**, **3**, and **4**. If you would like to experiment further, you are welcome to create and train more models under the **Models 5+** heading. All models will be specified in the `sample_models.py` file. After importing the `sample_models` module, you will train your architectures in the notebook.After experimenting with the five simple architectures, you will have the opportunity to compare their performance. Based on your findings, you will construct a deeper architecture that is designed to outperform all of the shallow models.For your convenience, we have designed the notebook so that each model can be specified and trained on separate occasions. That is, say you decide to take a break from the notebook after training **Model 1**. Then, you need not re-execute all prior code cells in the notebook before training **Model 2**. You need only re-execute the code cell below, that is marked with **`RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK`**, before transitioning to the code cells corresponding to **Model 2**.
###Code
#####################################################################
# RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK #
#####################################################################
# allocate 50% of GPU memory (if you like, feel free to change this)
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
set_session(tf.Session(config=config))
# watch for any changes in the sample_models module, and reload it automatically
%load_ext autoreload
%autoreload 2
# import NN architectures for speech recognition
from sample_models import *
# import function for training acoustic model
from train_utils import train_model
###Output
Using TensorFlow backend.
###Markdown
Model 0: RNNGiven their effectiveness in modeling sequential data, the first acoustic model you will use is an RNN. As shown in the figure below, the RNN we supply to you will take the time sequence of audio features as input.At each time step, the speaker pronounces one of 28 possible characters, including each of the 26 letters in the English alphabet, along with a space character (" "), and an apostrophe (').The output of the RNN at each time step is a vector of probabilities with 29 entries, where the $i$-th entry encodes the probability that the $i$-th character is spoken in the time sequence. (The extra 29th character is an empty "character" used to pad training examples within batches containing uneven lengths.) If you would like to peek under the hood at how characters are mapped to indices in the probability vector, look at the `char_map.py` file in the repository. The figure below shows an equivalent, rolled depiction of the RNN that shows the output layer in greater detail. The model has already been specified for you in Keras. To import it, you need only run the code cell below.
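For reference, here is a sketch of what `simple_rnn_model` amounts to, reconstructed from the summary printed below (a single 29-unit GRU over the input features followed by a softmax; the parameter count of 16,617 matches). See `sample_models.py` for the actual definition:
```python
from keras.models import Model
from keras.layers import Input, GRU, Activation

def simple_rnn_model_sketch(input_dim, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    simp_rnn = GRU(output_dim, return_sequences=True, name='rnn')(input_data)
    y_pred = Activation('softmax', name='softmax')(simp_rnn)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x  # RNNs preserve the temporal length
    return model
```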
###Code
from sample_models import *
model_0 = simple_rnn_model(input_dim=161) # change to 13 if you would like to use MFCC features
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
rnn (GRU) (None, None, 29) 16617
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 16,617
Trainable params: 16,617
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
As explored in the lesson, you will train the acoustic model with the [CTC loss](http://www.cs.toronto.edu/~graves/icml_2006.pdf) criterion. Custom loss functions take a bit of hacking in Keras, and so we have implemented the CTC loss function for you, so that you can focus on trying out as many deep learning architectures as possible :). If you'd like to peek at the implementation details, look at the `add_ctc_loss` function within the `train_utils.py` file in the repository.To train your architecture, you will use the `train_model` function within the `train_utils` module; it has already been imported in one of the above code cells. The `train_model` function takes three **required** arguments:- `input_to_softmax` - a Keras model instance.- `pickle_path` - the name of the pickle file where the loss history will be saved.- `save_model_path` - the name of the HDF5 file where the model will be saved.If we have already supplied values for `input_to_softmax`, `pickle_path`, and `save_model_path`, please **DO NOT** modify these values. There are several **optional** arguments that allow you to have more control over the training process. You are welcome to, but not required to, supply your own values for these arguments.- `minibatch_size` - the size of the minibatches that are generated while training the model (default: `20`).- `spectrogram` - Boolean value dictating whether spectrogram (`True`) or MFCC (`False`) features are used for training (default: `True`).- `mfcc_dim` - the size of the feature dimension to use when generating MFCC features (default: `13`).- `optimizer` - the Keras optimizer used to train the model (default: `SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)`). - `epochs` - the number of epochs to use to train the model (default: `20`). If you choose to modify this parameter, make sure that it is *at least* 20.- `verbose` - controls the verbosity of the training output in the `model.fit_generator` method (default: `1`).- `sort_by_duration` - Boolean value dictating whether the training and validation sets are sorted by (increasing) duration before the start of the first epoch (default: `False`).The `train_model` function defaults to using spectrogram features; if you choose to use these features, note that the acoustic model in `simple_rnn_model` should have `input_dim=161`. Otherwise, if you choose to use MFCC features, the acoustic model should have `input_dim=13`.We have chosen to use `GRU` units in the supplied RNN. If you would like to experiment with `LSTM` or `SimpleRNN` cells, feel free to do so here. If you change the `GRU` units to `SimpleRNN` cells in `simple_rnn_model`, you may notice that the loss quickly becomes undefined (`nan`) - you are strongly encouraged to check this for yourself! This is due to the [exploding gradients problem](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/). We have already implemented [gradient clipping](https://arxiv.org/pdf/1211.5063.pdf) in your optimizer to help you avoid this issue.__IMPORTANT NOTE:__ If you notice that your gradient has exploded in any of the models below, feel free to explore more with gradient clipping (the `clipnorm` argument in your optimizer) or swap out any `SimpleRNN` cells for `LSTM` or `GRU` cells. You can also try restarting the kernel to restart the training process.
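For the curious, a hedged sketch of how CTC loss can be wired into a Keras model via a `Lambda` layer, in the spirit of `add_ctc_loss` (the exact implementation in `train_utils.py` may differ):
```python
from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

def add_ctc_loss_sketch(input_to_softmax):
    labels = Input(name='the_labels', shape=(None,), dtype='float32')
    input_length = Input(name='input_length', shape=(1,), dtype='int64')
    label_length = Input(name='label_length', shape=(1,), dtype='int64')
    # map the input length to the temporal length of the softmax output
    output_length = Lambda(input_to_softmax.output_length)(input_length)
    # CTC loss is computed inside a Lambda layer using the Keras backend
    loss_out = Lambda(lambda args: K.ctc_batch_cost(*args),
                      output_shape=(1,), name='ctc')(
        [labels, input_to_softmax.output, output_length, label_length])
    return Model(inputs=[input_to_softmax.input, labels,
                         input_length, label_length],
                 outputs=loss_out)
```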
###Code
from train_utils import train_model
train_model(input_to_softmax=model_0,
pickle_path='model_0.pickle',
save_model_path='model_0.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
106/106 [==============================] - 201s 2s/step - loss: 832.3758 - val_loss: 727.4456
Epoch 2/20
106/106 [==============================] - 201s 2s/step - loss: 752.5418 - val_loss: 721.4544
Epoch 3/20
106/106 [==============================] - 200s 2s/step - loss: 751.8712 - val_loss: 730.7420
Epoch 4/20
106/106 [==============================] - 197s 2s/step - loss: 751.5974 - val_loss: 734.5410
Epoch 5/20
106/106 [==============================] - 196s 2s/step - loss: 752.5842 - val_loss: 717.3582
Epoch 6/20
106/106 [==============================] - 197s 2s/step - loss: 751.0046 - val_loss: 726.0532
Epoch 7/20
106/106 [==============================] - 197s 2s/step - loss: 752.4638 - val_loss: 726.9816
Epoch 8/20
106/106 [==============================] - 198s 2s/step - loss: 752.5248 - val_loss: 728.5321
Epoch 9/20
106/106 [==============================] - 197s 2s/step - loss: 751.4070 - val_loss: 725.8266
Epoch 10/20
106/106 [==============================] - 197s 2s/step - loss: 751.6268 - val_loss: 723.0809
Epoch 11/20
106/106 [==============================] - 196s 2s/step - loss: 751.6480 - val_loss: 725.8229
Epoch 12/20
106/106 [==============================] - 196s 2s/step - loss: 751.9354 - val_loss: 724.9918
Epoch 13/20
106/106 [==============================] - 196s 2s/step - loss: 751.0907 - val_loss: 724.6436
Epoch 14/20
106/106 [==============================] - 198s 2s/step - loss: 751.4871 - val_loss: 726.7535
Epoch 15/20
106/106 [==============================] - 196s 2s/step - loss: 751.1905 - val_loss: 722.4764
Epoch 16/20
106/106 [==============================] - 198s 2s/step - loss: 750.6781 - val_loss: 723.2440
Epoch 17/20
106/106 [==============================] - 197s 2s/step - loss: 750.9423 - val_loss: 729.6451
Epoch 18/20
106/106 [==============================] - 198s 2s/step - loss: 750.6979 - val_loss: 724.0448
Epoch 19/20
106/106 [==============================] - 198s 2s/step - loss: 751.4984 - val_loss: 731.4728
Epoch 20/20
106/106 [==============================] - 197s 2s/step - loss: 751.4527 - val_loss: 724.0936
###Markdown
(IMPLEMENTATION) Model 1: RNN + TimeDistributed DenseRead about the [TimeDistributed](https://keras.io/layers/wrappers/) wrapper and the [BatchNormalization](https://keras.io/layers/normalization/) layer in the Keras documentation. For your next architecture, you will add [batch normalization](https://arxiv.org/pdf/1510.01378.pdf) to the recurrent layer to reduce training times. The `TimeDistributed` layer will be used to find more complex patterns in the dataset. The unrolled snapshot of the architecture is depicted below.The next figure shows an equivalent, rolled depiction of the RNN that shows the (`TimeDistrbuted`) dense and output layers in greater detail. Use your research to complete the `rnn_model` function within the `sample_models.py` file. The function should specify an architecture that satisfies the following requirements:- The first layer of the neural network should be an RNN (`SimpleRNN`, `LSTM`, or `GRU`) that takes the time sequence of audio features as input. We have added `GRU` units for you, but feel free to change `GRU` to `SimpleRNN` or `LSTM`, if you like!- Whereas the architecture in `simple_rnn_model` treated the RNN output as the final layer of the model, you will use the output of your RNN as a hidden layer. Use `TimeDistributed` to apply a `Dense` layer to each of the time steps in the RNN output. Ensure that each `Dense` layer has `output_dim` units.Use the code cell below to load your model into the `model_1` variable. Use a value for `input_dim` that matches your chosen audio features, and feel free to change the values for `units` and `activation` to tweak the behavior of your recurrent layer.
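One way to satisfy these requirements (a sketch consistent with the summary printed below; the graded version belongs in `sample_models.py`):
```python
from keras.models import Model
from keras.layers import (Input, GRU, BatchNormalization,
                          TimeDistributed, Dense, Activation)

def rnn_model_sketch(input_dim, units, activation, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    simp_rnn = GRU(units, activation=activation,
                   return_sequences=True, name='rnn')(input_data)
    bn_rnn = BatchNormalization(name='bn_rnn')(simp_rnn)     # speeds up training
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)  # Dense at each time step
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x
    return model
```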
###Code
model_1 = rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
activation='relu')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
rnn (GRU) (None, None, 200) 217200
_________________________________________________________________
bn_rnn (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 223,829
Trainable params: 223,429
Non-trainable params: 400
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_1.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_1.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
train_model(input_to_softmax=model_1,
pickle_path='model_1.pickle',
save_model_path='model_1.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
106/106 [==============================] - 200s 2s/step - loss: 271.5375 - val_loss: 215.2374
Epoch 2/20
106/106 [==============================] - 201s 2s/step - loss: 226.0123 - val_loss: 226.9614
Epoch 3/20
106/106 [==============================] - 198s 2s/step - loss: 225.5427 - val_loss: 214.1131
Epoch 4/20
106/106 [==============================] - 198s 2s/step - loss: 224.8475 - val_loss: 218.0052
Epoch 5/20
106/106 [==============================] - 198s 2s/step - loss: 224.4640 - val_loss: 215.0090
Epoch 6/20
106/106 [==============================] - 197s 2s/step - loss: 224.8359 - val_loss: 210.9995
Epoch 7/20
106/106 [==============================] - 199s 2s/step - loss: 224.5322 - val_loss: 214.6766
Epoch 8/20
106/106 [==============================] - 198s 2s/step - loss: 224.6650 - val_loss: 213.1127
Epoch 10/20
106/106 [==============================] - 198s 2s/step - loss: 224.3817 - val_loss: 213.9965
Epoch 11/20
106/106 [==============================] - 198s 2s/step - loss: 224.4028 - val_loss: 214.6676
Epoch 12/20
106/106 [==============================] - 198s 2s/step - loss: 224.6196 - val_loss: 213.3244
Epoch 13/20
106/106 [==============================] - 198s 2s/step - loss: 224.3800 - val_loss: 212.8022
Epoch 14/20
106/106 [==============================] - 197s 2s/step - loss: 224.2215 - val_loss: 216.0779
Epoch 15/20
106/106 [==============================] - 197s 2s/step - loss: 224.4581 - val_loss: 212.5886
Epoch 16/20
106/106 [==============================] - 198s 2s/step - loss: 224.1820 - val_loss: 213.6726
Epoch 17/20
106/106 [==============================] - 198s 2s/step - loss: 224.1731 - val_loss: 212.3718
Epoch 18/20
106/106 [==============================] - 198s 2s/step - loss: 224.1342 - val_loss: 214.6201
Epoch 19/20
106/106 [==============================] - 198s 2s/step - loss: 224.1194 - val_loss: 211.2834
Epoch 20/20
106/106 [==============================] - 200s 2s/step - loss: 224.0086 - val_loss: 214.5752
###Markdown
(IMPLEMENTATION) Model 2: CNN + RNN + TimeDistributed DenseThe architecture in `cnn_rnn_model` adds an additional level of complexity, by introducing a [1D convolution layer](https://keras.io/layers/convolutional/conv1d). This layer incorporates many arguments that can be (optionally) tuned when calling the `cnn_rnn_model` module. We provide sample starting parameters, which you might find useful if you choose to use spectrogram audio features. If you instead want to use MFCC features, these arguments will have to be tuned. Note that the current architecture only supports values of `'same'` or `'valid'` for the `conv_border_mode` argument.When tuning the parameters, be careful not to choose settings that make the convolutional layer overly small. If the temporal length of the CNN layer is shorter than the length of the transcribed text label, your code will throw an error.Before running the code cell below, you must modify the `cnn_rnn_model` function in `sample_models.py`. Please add batch normalization to the recurrent layer, and provide the same `TimeDistributed` layer as before.
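The temporal-length constraint mentioned above is governed by the convolution arithmetic; the following sketch mirrors the `cnn_output_length` helper in `sample_models.py`:
```python
def cnn_output_length(input_length, filter_size, border_mode, stride, dilation=1):
    """Temporal length of a 1D convolution's output."""
    if input_length is None:
        return None
    dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
    if border_mode == 'same':
        output_length = input_length
    elif border_mode == 'valid':
        output_length = input_length - dilated_filter_size + 1
    return (output_length + stride - 1) // stride
```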
###Code
model_2 = cnn_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv1d (Conv1D) (None, None, 200) 354400
_________________________________________________________________
bn_conv_1d (BatchNormalizati (None, None, 200) 800
_________________________________________________________________
rnn (SimpleRNN) (None, None, 200) 80200
_________________________________________________________________
bn_rnn (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
time_distributed_2 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 442,029
Trainable params: 441,229
Non-trainable params: 800
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_2.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_2.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
train_model(input_to_softmax=model_2,
pickle_path='model_2.pickle',
save_model_path='model_2.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
106/106 [==============================] - 108s 1s/step - loss: 233.9037 - val_loss: 215.3739
Epoch 2/20
106/106 [==============================] - 55s 515ms/step - loss: 170.1712 - val_loss: 158.1822
Epoch 3/20
106/106 [==============================] - 56s 530ms/step - loss: 149.2637 - val_loss: 145.9997
Epoch 4/20
106/106 [==============================] - 56s 531ms/step - loss: 137.3644 - val_loss: 140.9082
Epoch 5/20
106/106 [==============================] - 56s 526ms/step - loss: 129.7414 - val_loss: 136.5908
Epoch 6/20
106/106 [==============================] - 55s 516ms/step - loss: 123.1647 - val_loss: 136.4024
Epoch 7/20
106/106 [==============================] - 56s 526ms/step - loss: 117.8127 - val_loss: 135.9587
Epoch 8/20
106/106 [==============================] - 55s 515ms/step - loss: 113.5190 - val_loss: 134.1884
Epoch 9/20
106/106 [==============================] - 54s 509ms/step - loss: 109.6732 - val_loss: 130.6733
Epoch 10/20
106/106 [==============================] - 56s 527ms/step - loss: 106.3541 - val_loss: 132.0268
Epoch 11/20
106/106 [==============================] - 56s 528ms/step - loss: 102.8600 - val_loss: 131.0430
Epoch 12/20
106/106 [==============================] - 56s 525ms/step - loss: 99.7022 - val_loss: 133.5133
Epoch 13/20
106/106 [==============================] - 54s 509ms/step - loss: 96.6316 - val_loss: 133.0950
Epoch 14/20
106/106 [==============================] - 54s 506ms/step - loss: 93.8300 - val_loss: 134.7976
Epoch 15/20
106/106 [==============================] - 55s 522ms/step - loss: 91.0680 - val_loss: 134.6847
Epoch 16/20
106/106 [==============================] - 56s 525ms/step - loss: 88.4579 - val_loss: 134.0580
Epoch 17/20
106/106 [==============================] - 56s 527ms/step - loss: 86.0148 - val_loss: 140.3218
Epoch 18/20
106/106 [==============================] - 54s 512ms/step - loss: 83.5431 - val_loss: 139.2986
Epoch 19/20
106/106 [==============================] - 55s 521ms/step - loss: 81.8140 - val_loss: 139.4966
Epoch 20/20
106/106 [==============================] - 55s 523ms/step - loss: 79.2393 - val_loss: 141.2783
###Markdown
(IMPLEMENTATION) Model 3: Deeper RNN + TimeDistributed DenseReview the code in `rnn_model`, which makes use of a single recurrent layer. Now, specify an architecture in `deep_rnn_model` that utilizes a variable number `recur_layers` of recurrent layers. The figure below shows the architecture that should be returned if `recur_layers=2`. In the figure, the output sequence of the first recurrent layer is used as input for the next recurrent layer.Feel free to change the supplied values of `units` to whatever you think performs best. You can change the value of `recur_layers`, as long as your final value is greater than 1. (As a quick check that you have implemented the additional functionality in `deep_rnn_model` correctly, make sure that the architecture that you specify here is identical to `rnn_model` if `recur_layers=1`.)
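One way to stack a variable number of recurrent layers (a sketch consistent with the layer names in the summary below, not the graded solution):
```python
from keras.models import Model
from keras.layers import (Input, GRU, BatchNormalization,
                          TimeDistributed, Dense, Activation)

def deep_rnn_model_sketch(input_dim, units, recur_layers, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    layer = input_data
    for i in range(recur_layers):
        # each recurrent layer feeds the next, with batch norm in between
        layer = GRU(units, return_sequences=True,
                    name='gru_{}'.format(i + 1))(layer)
        layer = BatchNormalization()(layer)
    time_dense = TimeDistributed(Dense(output_dim))(layer)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x
    return model
```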
###Code
model_3 = deep_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
recur_layers=2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
gru_1 (GRU) (None, None, 200) 217200
_________________________________________________________________
bn_rnn (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
gru_2 (GRU) (None, None, 200) 240600
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 200) 800
_________________________________________________________________
time_distributed_3 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 465,229
Trainable params: 464,429
Non-trainable params: 800
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_3.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_3.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
train_model(input_to_softmax=model_3,
pickle_path='model_3.pickle',
save_model_path='model_3.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
106/106 [==============================] - 329s 3s/step - loss: 279.0108 - val_loss: 229.9250
Epoch 2/20
106/106 [==============================] - 332s 3s/step - loss: 225.0682 - val_loss: 209.1276
Epoch 3/20
106/106 [==============================] - 333s 3s/step - loss: 215.4607 - val_loss: 206.2111
Epoch 4/20
106/106 [==============================] - 333s 3s/step - loss: 189.7864 - val_loss: 173.4158
Epoch 5/20
106/106 [==============================] - 336s 3s/step - loss: 160.5668 - val_loss: 165.4435
Epoch 6/20
106/106 [==============================] - 335s 3s/step - loss: 146.8373 - val_loss: 149.6192
Epoch 7/20
106/106 [==============================] - 333s 3s/step - loss: 137.9485 - val_loss: 141.9723
Epoch 8/20
106/106 [==============================] - 332s 3s/step - loss: 132.0306 - val_loss: 137.8731
Epoch 9/20
106/106 [==============================] - 334s 3s/step - loss: 127.4622 - val_loss: 134.9801
Epoch 10/20
106/106 [==============================] - 333s 3s/step - loss: 123.9866 - val_loss: 134.0194
Epoch 11/20
106/106 [==============================] - 335s 3s/step - loss: 120.6518 - val_loss: 137.8118
Epoch 12/20
106/106 [==============================] - 335s 3s/step - loss: 118.9663 - val_loss: 133.7554
Epoch 13/20
106/106 [==============================] - 332s 3s/step - loss: 116.0514 - val_loss: 128.0993
Epoch 14/20
106/106 [==============================] - 332s 3s/step - loss: 114.8396 - val_loss: 130.3831
Epoch 15/20
106/106 [==============================] - 333s 3s/step - loss: 112.8482 - val_loss: 133.6612
Epoch 16/20
106/106 [==============================] - 333s 3s/step - loss: 111.7261 - val_loss: 128.3950
Epoch 17/20
106/106 [==============================] - 331s 3s/step - loss: 110.4110 - val_loss: 125.9611
Epoch 18/20
106/106 [==============================] - 334s 3s/step - loss: 109.5875 - val_loss: 124.4582
Epoch 19/20
106/106 [==============================] - 333s 3s/step - loss: 107.5200 - val_loss: 125.6245
Epoch 20/20
106/106 [==============================] - 332s 3s/step - loss: 106.9535 - val_loss: 124.7905
###Markdown
(IMPLEMENTATION) Model 4: Bidirectional RNN + TimeDistributed DenseRead about the [Bidirectional](https://keras.io/layers/wrappers/) wrapper in the Keras documentation. For your next architecture, you will specify an architecture that uses a single bidirectional RNN layer, before a (`TimeDistributed`) dense layer. The added value of a bidirectional RNN is described well in [this paper](http://www.cs.toronto.edu/~hinton/absps/DRNN_speech.pdf).> One shortcoming of conventional RNNs is that they are only able to make use of previous context. In speech recognition, where whole utterances are transcribed at once, there is no reason not to exploit future context as well. Bidirectional RNNs (BRNNs) do this by processing the data in both directions with two separate hidden layers which are then fed forwards to the same output layer.Before running the code cell below, you must complete the `bidirectional_rnn_model` function in `sample_models.py`. Feel free to use `SimpleRNN`, `LSTM`, or `GRU` units. When specifying the `Bidirectional` wrapper, use `merge_mode='concat'`.
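A sketch of the single bidirectional layer followed by the time-distributed dense layer (consistent with the summary printed below, where the 434,400 parameters match a concatenated bidirectional 200-unit GRU):
```python
from keras.models import Model
from keras.layers import (Input, GRU, Bidirectional,
                          TimeDistributed, Dense, Activation)

def bidirectional_rnn_model_sketch(input_dim, units, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    # forward and backward passes are concatenated, doubling the feature size
    bidir_rnn = Bidirectional(GRU(units, return_sequences=True),
                              merge_mode='concat')(input_data)
    time_dense = TimeDistributed(Dense(output_dim))(bidir_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x
    return model
```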
###Code
model_4 = bidirectional_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
bidirectional_1 (Bidirection (None, None, 400) 434400
_________________________________________________________________
time_distributed_4 (TimeDist (None, None, 29) 11629
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 446,029
Trainable params: 446,029
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_4.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_4.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
train_model(input_to_softmax=model_4,
pickle_path='model_4.pickle',
save_model_path='model_4.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
106/106 [==============================] - 314s 3s/step - loss: 295.9663 - val_loss: 224.9329
Epoch 2/20
106/106 [==============================] - 317s 3s/step - loss: 225.4940 - val_loss: 204.5446
Epoch 3/20
106/106 [==============================] - 317s 3s/step - loss: 211.6394 - val_loss: 197.9675
Epoch 4/20
106/106 [==============================] - 317s 3s/step - loss: 199.5537 - val_loss: 190.3478
Epoch 5/20
106/106 [==============================] - 319s 3s/step - loss: 188.7032 - val_loss: 178.1398
Epoch 6/20
106/106 [==============================] - 316s 3s/step - loss: 178.7231 - val_loss: 170.9624
Epoch 7/20
106/106 [==============================] - 317s 3s/step - loss: 169.6119 - val_loss: 164.3558
Epoch 8/20
106/106 [==============================] - 318s 3s/step - loss: 161.0897 - val_loss: 157.8137
Epoch 9/20
106/106 [==============================] - 317s 3s/step - loss: 153.9279 - val_loss: 154.1404
Epoch 10/20
106/106 [==============================] - 318s 3s/step - loss: 147.2670 - val_loss: 148.8880
Epoch 11/20
106/106 [==============================] - 315s 3s/step - loss: 141.3428 - val_loss: 146.4738
Epoch 12/20
106/106 [==============================] - 317s 3s/step - loss: 136.2316 - val_loss: 143.0241
Epoch 13/20
106/106 [==============================] - 318s 3s/step - loss: 131.6063 - val_loss: 142.2228
Epoch 14/20
106/106 [==============================] - 319s 3s/step - loss: 127.2795 - val_loss: 140.0253
Epoch 15/20
106/106 [==============================] - 318s 3s/step - loss: 122.9308 - val_loss: 139.7404
Epoch 16/20
106/106 [==============================] - 317s 3s/step - loss: 119.3044 - val_loss: 137.9344
Epoch 17/20
106/106 [==============================] - 317s 3s/step - loss: 115.4683 - val_loss: 136.4806
Epoch 18/20
106/106 [==============================] - 318s 3s/step - loss: 112.4107 - val_loss: 137.9953
Epoch 19/20
106/106 [==============================] - 317s 3s/step - loss: 108.9697 - val_loss: 140.7246
Epoch 20/20
106/106 [==============================] - 315s 3s/step - loss: 106.0100 - val_loss: 136.1279
###Markdown
(OPTIONAL IMPLEMENTATION) Models 5+If you would like to try out more architectures than the ones above, please use the code cell below. Please continue to follow the same convention for saving the models; for the $i$-th sample model, please save the loss history at **`model_i.pickle`** and the trained model at **`model_i.h5`**.
###Code
## (Optional) TODO: Try out some more models!
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Compare the ModelsExecute the code cell below to evaluate the performance of the drafted deep learning models. The training and validation loss are plotted for each model.
###Code
from glob import glob
import numpy as np
import _pickle as pickle
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style(style='white')
# obtain the paths for the saved model history
all_pickles = sorted(glob("results/*.pickle"))
# extract the name of each model
model_names = [item[8:-7] for item in all_pickles]
# load the saved history for each model once, then extract the losses
histories = [pickle.load(open(p, "rb")) for p in all_pickles]
valid_loss = [h['val_loss'] for h in histories]
train_loss = [h['loss'] for h in histories]
# save the number of epochs used to train each model
num_epochs = [len(valid_loss[i]) for i in range(len(valid_loss))]
fig = plt.figure(figsize=(16,5))
# plot the training loss vs. epoch for each model
ax1 = fig.add_subplot(121)
for i in range(len(all_pickles)):
ax1.plot(np.linspace(1, num_epochs[i], num_epochs[i]),
train_loss[i], label=model_names[i])
# clean up the plot
ax1.legend()
ax1.set_xlim([1, max(num_epochs)])
plt.xlabel('Epoch')
plt.ylabel('Training Loss')
# plot the validation loss vs. epoch for each model
ax2 = fig.add_subplot(122)
for i in range(len(all_pickles)):
ax2.plot(np.linspace(1, num_epochs[i], num_epochs[i]),
valid_loss[i], label=model_names[i])
# clean up the plot
ax2.legend()
ax2.set_xlim([1, max(num_epochs)])
plt.xlabel('Epoch')
plt.ylabel('Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
__Question 1:__ Use the plot above to analyze the performance of each of the attempted architectures. Which performs best? Provide an explanation regarding why you think some models perform better than others. __Answer:__ Looking at the plots, we have some interesting results. All models behave in a similar way: at the beginning of training, each model's loss decreases dramatically, and then around the 3rd-5th epoch the loss plateaus. Judging by the training loss alone, the best model is Model 2 (the CNN), likely because the convolutional layer provides the most information to the RNN part of the network.We can also observe that Models 2, 3 and 4 perform much better than the first two models, which can be partially attributed to their large number of parameters, at about 500k. Below is an analysis of each model:- Model 0: a highly simplistic model with a single GRU recurrent layer. It is not able to capture all the complexities in the data.- Model 1: introduces some complexity by making use of a Batch Normalization layer and a TimeDistributed layer. Batch normalization helps the model cope with the variance of the data, and the TimeDistributed layer allows the model to convey the transitions between the temporal slices.- Model 2: had the fastest training times among all the models. It also seems to overfit a bit, as its training loss is lower than its validation loss. The fact that this model uses a SimpleRNN layer instead of the GRU layer the other models use might explain this characteristic.- Model 3: seems to achieve the best validation performance among all the models. It is consistent on the training as well as the validation sets. This might be due to the addition of an extra RNN layer.- Model 4: introduces a bidirectional RNN layer, which surprisingly does not perform as well as Model 3. One reason for this might be the absence of a Batch Normalization layer. (IMPLEMENTATION) Final ModelNow that you've tried out many sample models, use what you've learned to draft your own architecture! While your final acoustic model should not be identical to any of the architectures explored above, you are welcome to merely combine the explored layers above into a deeper architecture. It is **NOT** necessary to include new layer types that were not explored in the notebook.However, if you would like some ideas for even more layer types, check out these ideas for some additional, optional extensions to your model:- If you notice your model is overfitting to the training dataset, consider adding **dropout**! To add dropout to [recurrent layers](https://faroit.github.io/keras-docs/1.0.2/layers/recurrent/), pay special attention to the `dropout_W` and `dropout_U` arguments. This [paper](http://arxiv.org/abs/1512.05287) may also provide some interesting theoretical background.- If you choose to include a convolutional layer in your model, you may get better results by working with **dilated convolutions**. If you choose to use dilated convolutions, make sure that you are able to accurately calculate the length of the acoustic model's output in the `model.output_length` lambda function. You can read more about dilated convolutions in Google's [WaveNet paper](https://arxiv.org/abs/1609.03499). For an example of a speech-to-text system that makes use of dilated convolutions, check out this GitHub [repository](https://github.com/buriburisuri/speech-to-text-wavenet). 
You can work with dilated convolutions [in Keras](https://keras.io/layers/convolutional/) by paying special attention to the `padding` argument when you specify a convolutional layer.- If your model makes use of convolutional layers, why not also experiment with adding **max pooling**? Check out [this paper](https://arxiv.org/pdf/1701.02720.pdf) for example architecture that makes use of max pooling in an acoustic model.- So far, you have experimented with a single bidirectional RNN layer. Consider stacking the bidirectional layers, to produce a [deep bidirectional RNN](https://www.cs.toronto.edu/~graves/asru_2013.pdf)!All models that you specify in this repository should have `output_length` defined as an attribute. This attribute is a lambda function that maps the (temporal) length of the input acoustic features to the (temporal) length of the output softmax layer. This function is used in the computation of CTC loss; to see this, look at the `add_ctc_loss` function in `train_utils.py`. To see where the `output_length` attribute is defined for the models in the code, take a look at the `sample_models.py` file. You will notice this line of code within most models:```model.output_length = lambda x: x```The acoustic model that incorporates a convolutional layer (`cnn_rnn_model`) has a line that is a bit different:```model.output_length = lambda x: cnn_output_length( x, kernel_size, conv_border_mode, conv_stride)```In the case of models that use purely recurrent layers, the lambda function is the identity function, as the recurrent layers do not modify the (temporal) length of their input tensors. However, convolutional layers are more complicated and require a specialized function (`cnn_output_length` in `sample_models.py`) to determine the temporal length of their output.You will have to add the `output_length` attribute to your final model before running the code cell below. Feel free to use the `cnn_output_length` function, if it suits your model.
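As a concrete reference, here is a sketch consistent with the `final_model` summary printed below (Conv1D, Dropout, BatchNorm, GRU, BatchNorm, TimeDistributed Dense, softmax); the dropout rate is an assumption, and `cnn_output_length` is the helper discussed above:
```python
from keras.models import Model
from keras.layers import (Input, Conv1D, Dropout, BatchNormalization,
                          GRU, TimeDistributed, Dense, Activation)

def final_model_sketch(input_dim, filters, kernel_size, conv_stride,
                       conv_border_mode, units, dropout=0.3, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    conv_1d = Conv1D(filters, kernel_size, strides=conv_stride,
                     padding=conv_border_mode, activation='relu',
                     name='conv_layer_1')(input_data)
    drop_cnn = Dropout(dropout)(conv_1d)                  # rate is an assumption
    bn_cnn = BatchNormalization(name='conv_bn')(drop_cnn)
    rnn = GRU(units, return_sequences=True)(bn_cnn)
    bn_rnn = BatchNormalization(name='rnn_bn_gru_1')(rnn)
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    # the convolution shortens the temporal length, so compute it explicitly
    model.output_length = lambda x: cnn_output_length(
        x, kernel_size, conv_border_mode, conv_stride)
    return model
```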
###Code
# specify the model
model_end = final_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=200,
kernel_size=11,
conv_stride=1,
conv_border_mode='valid',
units=200)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv_layer_1 (Conv1D) (None, None, 200) 354400
_________________________________________________________________
dropout_2 (Dropout) (None, None, 200) 0
_________________________________________________________________
conv_bn (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
gru_5 (GRU) (None, None, 200) 240600
_________________________________________________________________
rnn_bn_gru_1 (BatchNormaliza (None, None, 200) 800
_________________________________________________________________
time_distributed_5 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 602,429
Trainable params: 601,629
Non-trainable params: 800
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_end.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_end.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
train_model(input_to_softmax=model_end,
pickle_path='model_end.pickle',
save_model_path='model_end.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
106/106 [==============================] - 206s 2s/step - loss: 280.3741 - val_loss: 257.9122
Epoch 2/20
106/106 [==============================] - 204s 2s/step - loss: 228.0922 - val_loss: 210.8335
Epoch 3/20
106/106 [==============================] - 203s 2s/step - loss: 207.2466 - val_loss: 189.8217
Epoch 4/20
106/106 [==============================] - 202s 2s/step - loss: 180.3884 - val_loss: 164.4573
Epoch 5/20
106/106 [==============================] - 203s 2s/step - loss: 161.0185 - val_loss: 151.6160
Epoch 6/20
106/106 [==============================] - 203s 2s/step - loss: 151.1197 - val_loss: 143.7066
Epoch 7/20
106/106 [==============================] - 202s 2s/step - loss: 145.1533 - val_loss: 141.4320
Epoch 8/20
106/106 [==============================] - 202s 2s/step - loss: 139.7824 - val_loss: 136.8355
Epoch 9/20
106/106 [==============================] - 203s 2s/step - loss: 135.2198 - val_loss: 133.1196
Epoch 10/20
106/106 [==============================] - 202s 2s/step - loss: 131.9494 - val_loss: 130.0909
Epoch 11/20
106/106 [==============================] - 203s 2s/step - loss: 129.7081 - val_loss: 128.2077
Epoch 12/20
106/106 [==============================] - 202s 2s/step - loss: 126.8082 - val_loss: 128.6146
Epoch 13/20
106/106 [==============================] - 203s 2s/step - loss: 124.7251 - val_loss: 127.4304
Epoch 14/20
106/106 [==============================] - 201s 2s/step - loss: 122.6122 - val_loss: 125.9810
Epoch 15/20
106/106 [==============================] - 202s 2s/step - loss: 121.3722 - val_loss: 122.8291
Epoch 16/20
106/106 [==============================] - 202s 2s/step - loss: 122.1733 - val_loss: 131.9557
Epoch 17/20
106/106 [==============================] - 202s 2s/step - loss: 120.6360 - val_loss: 125.1846
Epoch 18/20
106/106 [==============================] - 203s 2s/step - loss: 119.4076 - val_loss: 122.5974
Epoch 19/20
106/106 [==============================] - 203s 2s/step - loss: 117.3383 - val_loss: 120.0489
Epoch 20/20
106/106 [==============================] - 202s 2s/step - loss: 115.8139 - val_loss: 118.6194
###Markdown
__Question 2:__ Describe your final model architecture and your reasoning at each step. __Answer:__ I tried to incorporate the best features of the previous models:- A CNN layer to extract more information from the data (based on the performance of Model 2). - Dropout and Batch Normalization layers to prevent over-fitting. - A GRU RNN with standard, untweaked parameters, since the GRU's added complexity performs better than a SimpleRNN without too large an increase in processing time. - A TimeDistributed Dense layer followed by a softmax activation to compute the character probabilities.Possible further improvements include:- More training data (this usually improves deep learning models)- More convolutional layers (to extract more information from the data)- Incorporating a language model (as mentioned at the end of this notebook) STEP 3: Obtain PredictionsWe have written a function for you to decode the predictions of your acoustic model. To use the function, please execute the code cell below.
###Code
import numpy as np
from data_generator import AudioGenerator
from keras import backend as K
from utils import int_sequence_to_text
from IPython.display import Audio
def get_predictions(index, partition, input_to_softmax, model_path):
""" Print a model's decoded predictions
Params:
index (int): The example you would like to visualize
partition (str): One of 'train' or 'validation'
input_to_softmax (Model): The acoustic model
model_path (str): Path to saved acoustic model's weights
"""
# load the train and test data
data_gen = AudioGenerator()
data_gen.load_train_data()
data_gen.load_validation_data()
# obtain the true transcription and the audio features
if partition == 'validation':
transcr = data_gen.valid_texts[index]
audio_path = data_gen.valid_audio_paths[index]
data_point = data_gen.normalize(data_gen.featurize(audio_path))
elif partition == 'train':
transcr = data_gen.train_texts[index]
audio_path = data_gen.train_audio_paths[index]
data_point = data_gen.normalize(data_gen.featurize(audio_path))
else:
raise Exception('Invalid partition! Must be "train" or "validation"')
# obtain and decode the acoustic model's predictions
input_to_softmax.load_weights(model_path)
prediction = input_to_softmax.predict(np.expand_dims(data_point, axis=0))
output_length = [input_to_softmax.output_length(data_point.shape[0])]
pred_ints = (K.eval(K.ctc_decode(
prediction, output_length)[0][0])+1).flatten().tolist()
# play the audio file, and display the true and predicted transcriptions
print('-'*80)
Audio(audio_path)
print('True transcription:\n' + '\n' + transcr)
print('-'*80)
print('Predicted transcription:\n' + '\n' + ''.join(int_sequence_to_text(pred_ints)))
print('-'*80)
###Output
_____no_output_____
###Markdown
Use the code cell below to obtain the transcription predicted by your final model for the first example in the training dataset.
###Code
get_predictions(index=0,
partition='train',
input_to_softmax=final_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=200,
kernel_size=11,
conv_stride=1,
conv_border_mode='valid',
units=200),
model_path='results/model_end.h5')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv_layer_1 (Conv1D) (None, None, 200) 354400
_________________________________________________________________
dropout_4 (Dropout) (None, None, 200) 0
_________________________________________________________________
conv_bn (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
gru_9 (GRU) (None, None, 200) 240600
_________________________________________________________________
rnn_bn_gru_1 (BatchNormaliza (None, None, 200) 800
_________________________________________________________________
time_distributed_7 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 602,429
Trainable params: 601,629
Non-trainable params: 800
_________________________________________________________________
None
--------------------------------------------------------------------------------
True transcription:
the last two days of the voyage bartley found almost intolerable
--------------------------------------------------------------------------------
Predicted transcription:
thelistodas of thef w by fond on motintor be
--------------------------------------------------------------------------------
###Markdown
Use the next code cell to visualize the model's prediction for the first example in the validation dataset.
###Code
get_predictions(index=0,
partition='validation',
input_to_softmax=final_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=200,
kernel_size=11,
conv_stride=1,
conv_border_mode='valid',
units=200),
model_path='results/model_end.h5')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv_layer_1 (Conv1D) (None, None, 200) 354400
_________________________________________________________________
dropout_5 (Dropout) (None, None, 200) 0
_________________________________________________________________
conv_bn (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
gru_11 (GRU) (None, None, 200) 240600
_________________________________________________________________
rnn_bn_gru_1 (BatchNormaliza (None, None, 200) 800
_________________________________________________________________
time_distributed_8 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 602,429
Trainable params: 601,629
Non-trainable params: 800
_________________________________________________________________
None
--------------------------------------------------------------------------------
True transcription:
out in the woods stood a nice little fir tree
--------------------------------------------------------------------------------
Predicted transcription:
ito nho wo stote nyst ni forcrmy
--------------------------------------------------------------------------------
|
ReadNumPy.ipynb | ###Markdown
Reading NumPy Arrays NumPy provides various ways of reading arrays from files, whether they are written as ASCII text ("plain text") or in a specialized binary format. Related functions provide the ability to write arrays out to files. We'll examine a few of those functions here. (For a NumPy array refresher, consult the NumPy Array Tip Sheet.)First, let's look in our current working directory to see what data files we might have to work with. Jupyter notebooks provide a capability for executing commands in the shell, or command line, external to the notebook, by beginning a command with the "!" character (exclamation point, or "bang"). The Linux command "ls" lists the contents in the current directory. We can execute this command from within the notebook by entering and evaluating the expression ```!ls```. (You could also list the contents of the directory with the ```%ls``` magic function, but since we'll be issuing a few additional commands below that don't have magic function equivalents, we might as well use the "!" notation for all of them.) Step 1.Execute the expression in the code cell below and evaluate the results. You will see the name of a data file (with .txt suffix).
###Code
!ls
###Output
data.txt jn.py __pycache__ readNumpy.png
gradebook.db logo.npy ReadNumPy.ipynb
###Markdown
Notice that there is a data file named ```data.txt```. Using the ```head``` command in Linux, we can print out the first 10 lines of the file to see what it consists of. Step 2.Execute the expression in the code cell below and evaluate the results.
###Code
!head data.txt
###Output
52 119
53 119
54 119
55 119
56 119
57 119
58 119
59 119
60 119
61 119
###Markdown
It looks like the file contains a pair of integers on each row, which is something that we could read into a NumPy array. Before doing so, we might want to figure out how many lines are in the file (if it is a ridiculously large number, we might want to think twice about reading it all in). The ```wc -l``` command in Linux ("wc -l" is short for "word count -lines") returns the number of lines in a file. Step 3.Execute the expression in the code cell below to see how many lines the file has.
###Code
!wc -l data.txt
###Output
3569 data.txt
###Markdown
The file is not too big, so let's go ahead and read it in.First, you'll need to ```import numpy as np``` in order to access that module. Then you will want to use the ```np.loadtxt``` function in order to read in the data that is stored in the plain text file ```data.txt```.```np.loadtxt``` requires at least one argument, which is the name of the file that you want to read in. The filename is passed as a Python string, for example, "data.txt". Step 4.The code cell below reads in the contents of the data file and assigns it to the variable ```data```.
###Code
import numpy as np
data = np.loadtxt('data.txt')
###Output
_____no_output_____
###Markdown
Step 5.Let's examine some basic attributes of the data array. The code cell below prints both the ```shape``` and the ```dtype``` attributes of the array ```data```.
###Code
print( data.shape )
print( data.dtype )
###Output
(3569, 2)
float64
###Markdown
By default, the ```dtype``` of arrays read in via ```np.loadtxt``` is ```float64``` (floating point numbers). When we peeked at the data file above using the ```head``` command, it looked like the data might contain all integer (int) values. We can refine our call to ```np.loadtxt``` to tell it to read in the data as ```dtype=int``` instead, by supplying that as an additional option to the function call. Step 6.In the code cell below, enter and evaluate a revised version of the call to ```np.loadtxt```, supplying an additional option to read in the data as integers, and assign the result to the variable ```data```. Verify afterwards that the ```dtype``` of ```data``` is now int64 (64-bit integers, or int).
###Code
# YOUR CODE HERE
data = np.loadtxt('data.txt', dtype=int)
print(data.dtype)
###Output
int64
###Markdown
Self-CheckRun the cell below to test the correctness of your code above before submitting for grading.
###Code
# Run this self-test cell to check your code; do not add code or delete code in this cell
from jn import testType
try:
print(testType(data))
except Exception as e:
print("Error!\n" + str(e))
###Output
Correct!
###Markdown
Even though we have read the data in, we don't really know anything about it other than its type and its shape (3569 rows by 2 columns). Step 7.In the code cell below -- recalling that the `axis` parameter can be passed to methods and functions, in order to operate over a specified dimension of an array (for example: `axis=0`, `axis=1` )-- do the following:* write an expression to compute the minimum value in each column, using the ```min``` method of an array, and assign the result to the variable ```minvals```, which should be an array containing two elements (one for each column)* write an expression to compute the maximum value in each column, using the ```max``` method of an array, and assign the result to the variable ```maxvals```, which should be an array containing two elements (one for each column)* print the values of ```minvals``` and ```maxvals``` so you can examine the range of the data
###Code
# YOUR CODE HERE
minvals = data.min(axis=0)  # minimum value in each column
maxvals = data.max(axis=0)  # maximum value in each column
print(minvals)
print(maxvals)
###Output
_____no_output_____
###Markdown
Self-CheckRun the cell below to test the correctness of your code in the cell above.
###Code
# Run this self-test cell to check your code; do not add code or delete code in this cell
from jn import testMinvals, testMaxvals
try:
print(testMinvals(minvals))
except Exception as e:
print("Error!\n" + str(e))
try:
print(testMaxvals(maxvals))
except Exception as e:
print("Error!\n" + str(e))
###Output
Error!
name 'minvals' is not defined
Error!
name 'maxvals' is not defined
###Markdown
Sometimes it is useful to plot some new data you are working with, to get a sense of what it looks like. Since the array consists of two columns, it might be helpful to plot one column against the other, i.e., treat one column as "x" data and the other as associated "y" data. Step 8.In the code cell below, fill in the missing code (denoted as ```___```) so that you can plot the first column of the array on the x-axis, and the second column of the array on the y-axis. (Recall that we are plotting columns, which is why we first slice over all rows in the first index with ```:``` ; recall also how we start counting when we are indexing into an array or list.) The code below also specifies a square figure size, since we know from our examination above about ```minvals``` and ```maxvals``` that the range of the data is the same in each dimension.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(8,8))
plt.plot(data[:,0], data[:,1])
###Output
_____no_output_____
###Markdown
That plot is not particularly informative. It looks like the data points are criss-crossing back and forth across the plane, making it hard to discern a pattern. In such a case, a scatter plot is sometimes useful.

Step 9.

In the code cell below, fill in the missing code to make a scatter plot of the two columns in the data. (A few extra plotting options have been added to improve the resulting figure.) Note: a self-check will not accompany this exercise.
###Code
# FILL IN THE MISSING CODE BELOW
plt.figure(figsize=(8,8))
plt.scatter(data[:,0], data[:,1], s=3, c='r')
###Output
_____no_output_____
###Markdown
Aha! That's what the data represents! (Your plot should reveal the Cornell logo. If it doesn't, go back and fix your plotting commands above until it does.)

For what it's worth, the file ```data.txt``` was generated by processing a low-resolution jpeg image of the Cornell logo. Images are built up out of raster data, which are essentially one or more two-dimensional arrays describing the intensity of pixels in the image. Several image processing packages are available to convert images to numpy arrays.

We noted above that we can read and write arrays using either plain text files or binary files. What are "binary files"? They are files in which data are packed in as a sequence of bytes (1 byte = 8 bits), rather than encoded as text characters. The data are packed in some specified format, which means that they can hold data, images, computer programs, etc. But in order to unpack a binary file, you need to know what the format is, i.e., how the data were packed in the first place.

NumPy provides the functions ```loadtxt``` and ```savetxt``` for loading (reading) and saving (writing) arrays encoded in plain text files. Similarly, it provides the functions ```load``` and ```save``` for reading and writing arrays in a specific binary format, the so-called NumPy ```.npy``` format. The advantage of saving files to binary format is that the data are stored to full precision, along with information about their dtype, and binary files can be smaller than a text file encoding the same data. The disadvantage of using binary files is that they are less portable to other programming environments; while NumPy knows how to load a .npy file that has been saved, that might not be the case for other libraries or languages.

Step 10.

The code cell below uses ```np.save``` to save the ```data``` array to a new file named ```logo.npy```. Consult the online documentation for more information on how to use `np.save`.
###Code
np.save('logo.npy', data)
###Output
_____no_output_____
###Markdown
Just because the new file ```logo.npy``` is in binary format doesn't mean we can't look at it, although the results of such a process are not especially meaningful.

Step 11.

Use the ```head``` command from Step 2 to look at the first few lines of ```logo.npy```. You should notice that there is some legible metadata encoded with text characters at the top of the file, describing, for example, the shape of the array and its dtype ('<i8' is equivalent to 'int64'). All of the illegible stuff that follows is due to the fact that the binary data describing the array values are being interpreted as text and/or control characters rather than the 64-bit integers (int64) that they actually are.
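If you prefer to stay inside Python, here is a minimal sketch that peeks at the same header bytes (assuming ```logo.npy``` was written in Step 10):
```python
# Read the first 80 raw bytes: the magic string, version, and start of the header dict
with open('logo.npy', 'rb') as f:
    print(f.read(80))
```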
###Code
# YOUR CODE HERE
!head logo.npy
###Output
�NUMPY v {'descr': '<i8', 'fortran_order': False, 'shape': (3569, 2), }
4 w 5 w 6 w 7 w 8 w 9 w : w ; w < |
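###Markdown
To close the loop, ```np.load``` reads a ```.npy``` file back into an array with its shape and dtype intact. A minimal sketch of the round trip, assuming ```data.txt``` and ```logo.npy``` from the steps above:
```python
import numpy as np

data = np.loadtxt('data.txt', dtype=int)   # the original text-file read
restored = np.load('logo.npy')             # the binary file saved in Step 10
print(restored.dtype)                      # int64, preserved by the .npy format
print(np.array_equal(restored, data))      # True: values survive the round trip
```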