path | concatenated_notebook
---|---
temas/IV.optimizacion_convexa_y_machine_learning/4.6.Metodo_de_BL_para_puntos_iniciales_factibles_Python.ipynb | ###Markdown
**Notes for the docker container:** Docker command to run this notebook locally. Note: replace `` with the directory path you want to map to `/datos` inside the docker container.```docker run --rm -v :/datos --name jupyterlab_numerical -p 8888:8888 -d palmoreck/jupyterlab_numerical:1.1.0```Password for jupyterlab: `qwerty`. To stop the docker container:```docker stop jupyterlab_numerical``` Documentation for the docker image `palmoreck/jupyterlab_numerical:1.1.0` is at [this link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/numerical). --- Notebook generated from [link 1](https://drive.google.com/file/d/1zCIHNAxe5Shc36Qo0XjehHgwrafKSJ_t/view), [link 2](https://drive.google.com/file/d/1RMwUXEN_SOHKue-J9Cx3Ldvj9bejLjiM/view).
###Code
!pip3 install --user -q cvxpy
import os
cur_directory = os.getcwd()
dir_alg_python = '/algoritmos/Python'
os.chdir(cur_directory + dir_alg_python)
import math
import numpy as np
from utils import compute_error
from algorithms_for_cieco import path_following_method_feasible_init_point
###Output
_____no_output_____
###Markdown
First example $$ \min \quad x_1^2 + x_2^2 + x_3^2 + x_4^2 -2x_1-3x_4$$ $$\text{subject to: } $$ $$\begin{array}{c}2x_1 + x_2 + x_3 + 4x_4 = 7 \\x_1 + x_2 + 2x_3 + x_4 = 6\end{array}$$ $$x_1, x_2, x_3, x_4 \geq 0$$
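For reference, a sketch of the log-barrier (path-following) scheme that `path_following_method_feasible_init_point` implements below, assuming the standard formulation (the exact stopping rules live in `algorithms_for_cieco.py`): starting from the strictly feasible point $x_0$, each outer iteration solves the centering problem

$$\min_x \quad t f_o(x) - \sum_{i=1}^{m} \log(-f_i(x)) \quad \text{subject to: } Ax = b,$$

then multiplies the barrier parameter $t$ by $\mu$ (`mu` in the code), stopping once the duality-gap bound $m/t$ drops below `tol_outer_iter`, where $m$ is the number of inequality constraints in `const`.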
###Code
fo = lambda x: x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2-2*x[0]-3*x[3]
const = {0: lambda x: -x[0],
1: lambda x: -x[1],
2: lambda x: -x[2],
3: lambda x: -x[3]
}
A= np.array([[2,1,1,4],
[1,1,2,1]])
b=np.array([7,6])
x_ast=np.array([1.1232876712328763,0.6506849315068493,
1.8287671232876714,0.5684931506849317])
x_0 = np.array([8.082191780821915e-01,
8.767123287671235e-01,
1.821917808219178e+00,
6.712328767123281e-01])
x_0
p_ast=fo(x_ast)
p_ast
tol_outer_iter = 1e-6
tol=1e-8
tol_backtracking=1e-12
maxiter=30
mu=10
[x,iter_barrier,t] = path_following_method_feasible_init_point(fo, A, const,
x_0, tol,
tol_backtracking, x_ast, p_ast, maxiter,
mu, tol_outer_iter = tol_outer_iter
)
[x,iter_barrier,t]
compute_error(x_ast,x)
###Output
_____no_output_____
###Markdown
Comparison with [cvxpy](https://github.com/cvxgrp/cvxpy)
###Code
import cvxpy as cp
x1 = cp.Variable()
x2 = cp.Variable()
x3 = cp.Variable()
x4 = cp.Variable()
# Create the constraints: two equality constraints plus nonnegativity bounds.
constraints = [2*x1+x2+x3+4*x4-7 == 0,x1+x2+2*x3+x4-6 == 0,x1>=0,x2>=0,x3>=0,x4>=0]
# Form objective.
obj = cp.Minimize(x1**2+x2**2+x3**2+x4**2-2*x1-3*x4)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve() # Returns the optimal value.
print("status:", prob.status)
print("optimal value", prob.value)
print("optimal var", x1.value, x2.value, x3.value,x4.value)
###Output
status: optimal
optimal value 1.4006849315068515
optimal var 1.1232876712328763 0.6506849315068494 1.8287671232876717 0.5684931506849316
###Markdown
Second example $$\min 2x_1 + 5x_2$$ $$\text{subject to: }$$ $$\begin{array}{c}6-x_1-x_2 \leq 0 \\-18 + x_1 +2x_2 \leq 0\\x_1, x_2 \geq 0\end{array}$$
###Code
fo = lambda x: 2*x[0] + 5*x[1]
const = {0: lambda x: 6-x[0]-x[1],
1: lambda x: -18+x[0]+2*x[1],
2: lambda x: -x[0],
3: lambda x: -x[1]
}
A=np.array([0,0],dtype=float) # no equality constraints in this problem; this A acts as a placeholder
b = 0
x_ast = np.array([6,0], dtype=float)
x_0 = np.array([4,4], dtype=float)
p_ast=fo(x_ast)
p_ast
tol_outer_iter = 1e-3
tol=1e-8
tol_backtracking=1e-12
maxiter=30
mu=10
[x,iter_barrier,t] = path_following_method_feasible_init_point(fo, A, const,
x_0, tol,
tol_backtracking, x_ast, p_ast, maxiter,
mu, tol_outer_iter=tol_outer_iter
)
[x,iter_barrier,t]
compute_error(x_ast,x)
###Output
_____no_output_____
###Markdown
Comparison with [cvxpy](https://github.com/cvxgrp/cvxpy)
###Code
x1 = cp.Variable()
x2 = cp.Variable()
# Create the constraints: two linear inequalities plus nonnegativity bounds.
constraints = [6-x1-x2 <= 0,-18+x1+2*x2<=0,x1>=0,x2>=0]
# Form objective.
obj = cp.Minimize(2*x1+5*x2)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve() # Returns the optimal value.
print("status:", prob.status)
print("optimal value", prob.value)
print("optimal var", x1.value, x2.value)
###Output
status: optimal
optimal value 12.0000000016275
optimal var 6.000000000175689 2.552244387851183e-10
|
2-afd-model-setup.ipynb | ###Markdown
Part 2: Setting up an Amazon Fraud Detector model
###Code
# Uncomment and install s3fs, this is required to read CSV files from S3 directly into Pandas dataframe
# Once installed, please restart the Notebook Kernel (Kernel > Restart Kernel) before proceeding
#%pip install s3fs
###Output
_____no_output_____
###Markdown
Overview * [Notebook 1: Data Preparation, Process, and Store Features](./1-data-analysis-prep.ipynb)* **[Notebook 2: Amazon Fraud Detector Model Setup](./2-afd-model-setup.ipynb)** * **[Introduction](intro)** * **[Setup Notebook](setup)** * **[Set AFD Entity type, event type, and Detector names](entity)** * **[Profile Your Dataset](profile)** * **[Create Labels, Variables, Entity and Event Types](labels)** * **[Conclusion](conclusion)*** [Notebook 3: Model training, deployment, real-time and batch inference](./3-afd-model-train-deploy.ipynb)* [Notebook 4: Create an end-to-end pipeline](./4-afd-pipeline.ipynb) 1. Introduction ___overview Amazon Fraud Detector is a fully managed service that makes it easy to identify potentially fraudulent online activities such as online payment fraud and the creation of fake accounts. Fraud Detector capitalizes on the latest advances in machine learning (ML) and 20 years of fraud detection expertise from AWS and Amazon.com to automatically identify potentially fraudulent activity so you can catch more fraud faster.In this notebook, we'll use the Amazon Fraud Detector API to define an entity and event of interest and use CSV data stored in S3 to train a model. Next, we'll derive some rules and create a "detector" by combining our entity, event, model, and rules into a single endpoint. Finally, we'll apply the detector to a sample of our data to identify potentially fraudulent events.After running this notebook you should be able to:* Define an Entity and Event* Create a Detector* Train a Machine Learning (ML) Model* Author Rules to identify potential fraud based on the model's score* Apply the Detector's "predict" function, to generate a model score and rule outcomes on dataIf you would like to know more, please check out [Fraud Detector's Documentation](https://docs.aws.amazon.com/frauddetector/latest/ug/what-is-frauddetector.html).To create models within Amazon Fraud Detector, you must provide data for training. This data has input features (defined by variables) and output labels (defined by labels in the Amazon Fraud Detector service). Additionally, you define events based on the type of entities sending the data for predictions. The following diagram shows the sequence of component creation followed in this tutorial. IAM Permissions---To use Amazon Fraud Detector, you have to set up permissions that allow access to the Amazon Fraud Detector console and API operations. You also have to allow Amazon Fraud Detector to perform tasks on your behalf and to access resources that you own. The following policies provide the required permission to use Amazon Fraud Detector:* `AmazonFraudDetectorFullAccessPolicy` Allows you to perform the following actions: - Access all Amazon Fraud Detector resources - List and describe all model endpoints in Amazon SageMaker - List all IAM roles in the account - List all Amazon S3 buckets - Allow IAM Pass Role to pass a role to Amazon Fraud Detector * `AmazonS3FullAccess` Allows full access to Amazon S3. This is required to upload training files to S3.In this case we will assign `AmazonFraudDetectorFullAccessPolicy` and `AmazonS3FullAccess` policies to the SageMaker Execution Role. Plan Plan a Fraud Detector---A Detector contains the event, model(s) and rule(s) detection logic for a particular type of fraud that you want to detect. 
We'll use the following 7-step process to plan a Fraud Detector:* Setup your notebook - Name the major components `entity`, `entity type`, `model`, `detector` - Get IAM role ARN - S3 Bucket with your training data CSV File* Read and Profile your Data - This will give you an idea of what your dataset contains - This will also identify the variables and labels that will need to be created to define your event* Create event variables and labels - This will create the variables and labels in fraud detector* Define your Entity and Event Type - What is the activity that you are detecting? That's likely your Event Type (e.g., account_registration) - Who is performing this activity? That's likely your Entity (e.g., customer)* Create and Train your Model - Model training takes anywhere from 45-60 minutes - Promote your model once training is complete* Create Detector, generate Rules and assemble your Detector - Create your detector - Create rules based on your model scores - Define outcomes (e.g., fraud, investigate and approve) - Assemble your detector by adding your model and rules to it* Test your Detector - Interactively call predict on a handful of records 2. Setup your Notebook ---overview1. Name the major components of Fraud Detector2. Get IAM role ARN 3. S3 Bucket with your training data CSV FileThen you can interactively execute the code cells in the notebook; there is no need to change anything unless you want to. 💡 Fraud Detector ComponentsEVENT_TYPE is a business activity that you want evaluated for fraud risk. ENTITY_TYPE represents the "what or who" that is performing the event you want to evaluate. MODEL_NAME is the name of your supervised machine learning model that Fraud Detector trains on your behalf. DETECTOR_NAME is the name of the detector that contains the detection logic (model and rules) that you apply to events that you want to evaluate for fraud.We will import some necessary libraries that will be used throughout this notebook.
###Code
from IPython.core.display import display, HTML
from IPython.display import clear_output, JSON
display(HTML("<style>.container { width:90% }</style>"))
# ------------------------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import sys
import time
import json
import uuid
from datetime import datetime
import boto3
import sagemaker
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
###Output
_____no_output_____
###Markdown
Set region, boto3 and SageMaker SDK variables We will initialize Fraud Detector, S3 and SageMaker boto3 client objects.
###Code
#You can change this to a region of your choice
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
# -- initialize S3 Client
s3_client = boto3.client('s3', region_name=region)
# -- initialize the AFD client
client = boto3.client('frauddetector')
sagemaker_boto_client = boto_session.client('sagemaker')
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session,
sagemaker_client=sagemaker_boto_client)
# -- suffix is appended to detector and model name for uniqueness
sufx = datetime.now().strftime("%Y%m%d")
###Output
_____no_output_____
###Markdown
We will get the SageMaker Execution Role
###Code
print('SageMaker Role:', sagemaker.get_execution_role().split('/')[-1])
ARN_ROLE = sagemaker.get_execution_role()
%store ARN_ROLE
###Output
SageMaker Role: AmazonSageMaker-ExecutionRole-20201030T135016
Stored 'ARN_ROLE' (str)
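If the execution role shown above is missing the `AmazonFraudDetectorFullAccessPolicy` and `AmazonS3FullAccess` policies described in the introduction, they can be attached programmatically. A minimal sketch, assuming you have IAM permissions to modify the role and that the AWS-managed policy ARNs below apply to your account/partition:

```python
import boto3
import sagemaker

iam = boto3.client('iam')
role_name = sagemaker.get_execution_role().split('/')[-1]

# Assumed AWS-managed policy ARNs -- verify them for your environment.
for policy_arn in ['arn:aws:iam::aws:policy/AmazonFraudDetectorFullAccessPolicy',
                   'arn:aws:iam::aws:policy/AmazonS3FullAccess']:
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)
```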
###Markdown
Set S3 training data file locationWe will now initialize a variable with the file path of our training data. If you have stepped through and executed the [1-data-analysis-prep.ipynb](./1-data-analysis-prep.ipynb) notebook, you should have your final training data set CSV uploaded into a location in S3. If not, you may use the training dataset that is included in the `data/` directory `afd_training_data.csv`. Executing the subsequent code cells will initialize S3 related variables and pull variables stored in Jupyter's local cache in case you have executed the previous notebook ([1-data-analysis-prep.ipynb](./1-data-analysis-prep.ipynb)). Once you replace `YOUR_PREFIX_GOES_HERE` with your S3 prefix it will check if the file exists in the S3 path, if not it will upload the provided training data to the default S3 location as defined.
###Code
S3_FILE = "afd_training_data.csv"
training_prefix = "training_data"
###Output
_____no_output_____
###Markdown
✅ Replace YOUR_PREFIX_GOES_HERE with your S3 bucket prefix where your training data CSV file resides in the code cell below.
###Code
from datetime import datetime
current_time = datetime.now()
if 'afd_bucket' in globals():
%store -r afd_bucket
%store -r afd_prefix
S3_BUCKET = afd_bucket
print(f'{current_time}: Using default bucket: {S3_BUCKET}... Initialized folder {S3_BUCKET}/{afd_prefix}/{training_prefix}')
else:
print(f'{current_time}: Bucket name not in local cache initializing')
# initialize with sagemaker default bucket and a prefix where your training data is located
afd_bucket = sagemaker_session.default_bucket()
afd_prefix = YOUR_PREFIX_GOES_HERE # ---> Add your prefix here
%store afd_bucket
%store afd_prefix
S3_BUCKET = afd_bucket
print(f'{current_time}: Bucket {S3_BUCKET}... Initialized folder {afd_prefix}/{training_prefix}')
current_time = datetime.now()
try:
# Check if the file exists in the said S3 bucket/prefix location
objects_in_bucket = s3_client.list_objects(Bucket=S3_BUCKET, Prefix=f"{afd_prefix}/{training_prefix}/{S3_FILE}")
print(f"{current_time}: File {objects_in_bucket['Contents'][0]['Key']} found")
S3_FILE_LOC = f"s3://{S3_BUCKET}/{afd_prefix}/{training_prefix}/{S3_FILE}"
%store S3_FILE_LOC
print(f"{current_time}: S3 Location initalized ... s3://{S3_BUCKET}/{afd_prefix}/{training_prefix}/{S3_FILE}")
except Exception as e:
print(f"{current_time}: File {afd_prefix}/{training_prefix}/{S3_FILE} not found uploading from local...")
print(f"{current_time}: Upoading File {afd_prefix}/{training_prefix}/{S3_FILE} to s3://{S3_BUCKET} ...")
# Upload the training data from local to the S3 bucket
s3_client.upload_file(Filename=f'data/{S3_FILE}', Bucket=S3_BUCKET, Key=f'{afd_prefix}/{training_prefix}/{S3_FILE}')
S3_FILE_LOC = f"s3://{S3_BUCKET}/{afd_prefix}/{training_prefix}/{S3_FILE}"
%store S3_FILE_LOC
###Output
2021-05-13 19:30:26.497863: File amazon-fraud-detector/training_data/afd_training_data.csv found
Stored 'S3_FILE_LOC' (str)
2021-05-13 19:30:26.497863: S3 Location initialized ... s3://sagemaker-us-east-2-965425568475/amazon-fraud-detector/training_data/afd_training_data.csv
###Markdown
3. Set AFD Entity type, event type, and Detector names ---overview
###Code
ENTITY_TYPE = "afd_demo_entity_{0}".format(sufx)
ENTITY_DESC = "AFD Entity: {0}".format(sufx)
EVENT_TYPE = "afd_demo_event_{0}".format(sufx)
EVENT_DESC = "AFD Event Type: {0}".format(sufx)
MODEL_NAME = "afd_demo_model_{0}".format(sufx)
MODEL_DESC = "AFD model trained on: {0}".format(sufx)
DETECTOR_NAME = "afd_detector_{0}".format(sufx)
DETECTOR_DESC = "Detects synthetic fraud events created: {0}".format(sufx)
# store name in cache
%store ENTITY_TYPE
%store ENTITY_DESC
%store EVENT_TYPE
%store EVENT_DESC
%store MODEL_NAME
%store MODEL_DESC
%store DETECTOR_NAME
%store DETECTOR_DESC
###Output
Stored 'ENTITY_TYPE' (str)
Stored 'ENTITY_DESC' (str)
Stored 'EVENT_TYPE' (str)
Stored 'EVENT_DESC' (str)
Stored 'MODEL_NAME' (str)
Stored 'MODEL_DESC' (str)
Stored 'DETECTOR_NAME' (str)
Stored 'DETECTOR_DESC' (str)
###Markdown
4. Profile Your Dataset -----overviewA small profiler utility function `summary_stats()` is defined in the `data_profiler.py` file. The function will: * Profile your data, creating descriptive statistics * Perform basic data quality checks (nulls, unique variables, etc.), and * return summary statistics and the EVENT and MODEL schemas used to define your EVENT_TYPE and TRAIN your MODEL.
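Purely to illustrate the kind of checks described above, here is a hypothetical sketch of a basic profile computed directly with pandas (this is not the actual implementation in `data_profiler.py`, which additionally derives the event and model schemas):

```python
# Hypothetical sketch of a basic data-quality profile (not the real summary_stats()):
def basic_profile(df):
    return pd.DataFrame({
        'dtype': df.dtypes.astype(str),       # column data types
        'non_null': df.count(),               # non-null counts
        'nunique': df.nunique(),              # distinct values per column
        'null_pct': df.isnull().mean() * 100  # percentage of missing values
    })

# basic_profile(df)  # 'df' is loaded from S3 a couple of cells below
```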
###Code
import sys
import s3fs # This is required to read CSV data directly from S3 into Pandas dataframe
# Import profiler function
sys.path.insert(0, './')
from data_profiler import summary_stats
###Output
_____no_output_____
###Markdown
💡 Note: If you make changes to the data_profiler.py script after you execute the code cell above, please make sure to restart the Kernel (Kernel > Restart Kernel) and run the notebook again.
###Code
# Load the Training data set in a dataframe
df = pd.read_csv(S3_FILE_LOC)
df.describe()
# ------
# Alternate: If the code above fails to execute then comment the above two lines
# and uncomment the lines below and execute this cell again
# fs = s3fs.S3FileSystem(anon=False)
# with fs.open(S3_FILE_LOC) as f:
# df = pd.read_csv(f)
# -----
df_stats, trainingDataSchema, eventVariables, eventLabels = summary_stats(df)
%store trainingDataSchema
%store eventVariables
###Output
_____no_output_____
###Markdown
5. Create Labels, Variables, Entity and Event Types -----overview1. **Events and Event Types** An event is a business activity that is evaluated for fraud risk. With Amazon Fraud Detector, you generate fraud predictions for events. An event type defines the structure for an event sent to Amazon Fraud Detector. This includes the variables sent as part of the event, the entity performing the event (such as a customer), and the labels that classify the event. Example event types include online payment transactions, account registrations, and authentication.2. **Entity and Entity Type** An entity represents who is performing the event. As part of a fraud prediction, you can pass the entity ID to indicate the specific entity who performed the event. An entity type classifies the entity. Example classifications include customer, merchant, or account. Before we can create Event and Entity types, we must first create Labels and Variables. 3. **Label** A label classifies an event as fraudulent or legitimate. Labels are used to train supervised machine learning models in Amazon Fraud Detector. 4. **Variable** A variable represents a data element associated with an event that you want to use in a fraud prediction. Variables can either be sent with an event as part of a fraud prediction or derived, such as the output of an Amazon Fraud Detector model or Amazon SageMaker model. In this case we will create variables based on the input features in our training dataset and their corresponding datatypes.For more information, refer to the [documentation](https://docs.aws.amazon.com/frauddetector/latest/ug/frauddetector-ml-concepts.html). 5.1 Create Label and Variables---We are going to use the [PutLabel](https://docs.aws.amazon.com/frauddetector/latest/api/API_PutLabel.html) API to create labels for the Fraud Detector model. A label classifies an event as fraudulent or legitimate. Labels are associated with event types and used to train supervised machine learning models in Amazon Fraud Detector.
###Code
try:
fraud_lbl = client.put_label(
name = "fraud",
description = 'fraud')
legit_lbl = client.put_label(
name = "legit",
description = 'legit')
print(f"Labels have been created")
display(JSON(fraud_lbl))
display(JSON(legit_lbl))
except Exception as e:
print(e)
###Output
Labels have been created
###Markdown
We have a small helper function which will look through our dataset stats and create the variables required for the AFD model. This function uses the [CreateVariable](https://docs.aws.amazon.com/frauddetector/latest/api/API_CreateVariable.html) API.
###Code
def create_variables(df_stats, MODEL_NAME):
    """
    Returns a list of model input variables, checking whether each variable exists
    and, if not, adding it to Fraud Detector.
    Arguments:
    df_stats -- summary-statistics DataFrame returned by summary_stats(), with
                'feature_name' and 'feature_type' columns
    MODEL_NAME -- name of the Fraud Detector model (not used inside this function)
    Returns:
    variable_list -- a list of variable dictionaries
    """
    enrichment_features = df_stats.loc[(df_stats['feature_type'].isin(['IP_ADDRESS', 'EMAIL_ADDRESS']))].to_dict(orient="records")
    numeric_features = df_stats.loc[(df_stats['feature_type'].isin(['NUMERIC']))]['feature_name'].to_dict()
    categorical_features = df_stats.loc[(df_stats['feature_type'].isin(['CATEGORY']))]['feature_name'].to_dict()
variable_list = []
# -- first do the enrichment features
for feature in enrichment_features:
variable_list.append( {'name' : feature['feature_name']})
try:
resp = client.get_variables(name=feature['feature_name'])
except:
print("Creating variable: {0}".format(feature['feature_name']))
resp = client.create_variable(
name = feature['feature_name'],
dataType = 'STRING',
dataSource ='EVENT',
defaultValue = '<unknown>',
description = feature['feature_name'],
variableType = feature['feature_type'] )
# -- check and update the numeric features
for feature in numeric_features:
variable_list.append( {'name' : numeric_features[feature]})
try:
resp = client.get_variables(name=numeric_features[feature])
except:
print("Creating variable: {0}".format(numeric_features[feature]))
resp = client.create_variable(
name = numeric_features[feature],
dataType = 'FLOAT',
dataSource ='EVENT',
defaultValue = '0.0',
description = numeric_features[feature],
variableType = 'NUMERIC' )
# -- check and update the categorical features
for feature in categorical_features:
variable_list.append( {'name' : categorical_features[feature]})
try:
resp = client.get_variables(name=categorical_features[feature])
except:
print("Creating variable: {0}".format(categorical_features[feature]))
resp = client.create_variable(
name = categorical_features[feature],
dataType = 'STRING',
dataSource ='EVENT',
defaultValue = '<unknown>',
description = categorical_features[feature],
variableType = 'CATEGORICAL' )
return variable_list
###Output
_____no_output_____
###Markdown
Call the function to create the variables.
###Code
# Call the create_variables function
model_variables = create_variables(df_stats, MODEL_NAME)
# Display output
display(HTML("<h4>Model variable dict</h4>"))
display(JSON(model_variables))
###Output
_____no_output_____
###Markdown
5.2 Create Entity and Event Types---We will use the [PutEntityType](https://docs.aws.amazon.com/frauddetector/latest/api/API_PutEntityType.html) API to create the entity type. The code checks whether the entity type exists; if not, it creates one.
###Code
try:
response = client.get_entity_types( name = ENTITY_TYPE )
display(HTML("<h4>Entity already exists</h4>"))
display(JSON(response))
except Exception as e:
print(f"Entity {ENTITY_TYPE} does not exist" )
response = client.put_entity_type(
name = ENTITY_TYPE,
description = ENTITY_DESC
)
display(HTML("<h4>Created entity</h4>"))
display(JSON(response))
###Output
_____no_output_____
###Markdown
and we will use the [PutEventType](https://docs.aws.amazon.com/frauddetector/latest/api/API_PutEventType.html) API to create the event type. The code checks whether the event type exists; if not, it creates one.
###Code
try:
response = client.get_event_types( name = EVENT_TYPE )
display(HTML("<h4>Event type already exists</h4>"))
display(JSON(response))
except Exception as e:
print(f"Event {EVENT_TYPE} does not exist" )
response = client.put_event_type (
name = EVENT_TYPE,
eventVariables = eventVariables,
labels = eventLabels,
entityTypes = [ENTITY_TYPE])
display(HTML("<h4>Created event type</h4>"))
display(JSON(response))
###Output
_____no_output_____ |
Notebooks/CS_224N_ww_classifier.ipynb | ###Markdown
CS 224N Lecture 3: Word Window Classification PyTorch Exploration Author: Matthew Lamm
###Code
import pprint
import torch
import torch.nn as nn
pp = pprint.PrettyPrinter()
###Output
_____no_output_____
###Markdown
Our DataThe task at hand is to assign a label of 1 to words in a sentence that correspond with a LOCATION, and a label of 0 to everything else. In this simplified example, we only ever see spans of length 1.
###Code
train_sents = [s.lower().split() for s in ["we 'll always have Paris",
"I live in Germany",
"He comes from Denmark",
"The capital of Denmark is Copenhagen"]]
train_labels = [[0, 0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1, 0, 1]]
assert all([len(train_sents[i]) == len(train_labels[i]) for i in range(len(train_sents))])
test_sents = [s.lower().split() for s in ["She comes from Paris"]]
test_labels = [[0, 0, 0, 1]]
assert all([len(test_sents[i]) == len(test_labels[i]) for i in range(len(test_sents))])
###Output
_____no_output_____
###Markdown
Creating a dataset of batched tensors. PyTorch (like other deep learning frameworks) is optimized to work on __tensors__, which can be thought of as a generalization of vectors and matrices with arbitrarily large rank.Here we'll go over how to translate data to a list of vocabulary indices, and how to construct *batch tensors* out of the data for easy input to our model. We'll use the *torch.utils.data.DataLoader* object to handle batching and iteration. Converting tokenized sentence lists to vocabulary indices.Let's assume we have the following vocabulary:
###Code
id_2_word = ["<pad>", "<unk>", "we", "always", "have", "paris",
"i", "live", "in", "germany",
"he", "comes", "from", "denmark",
"the", "of", "is", "copenhagen"]
word_2_id = {w:i for i,w in enumerate(id_2_word)}
instance = train_sents[0]
print(instance)
def convert_tokens_to_inds(sentence, word_2_id):
return [word_2_id.get(t, word_2_id["<unk>"]) for t in sentence]
token_inds = convert_tokens_to_inds(instance, word_2_id)
pp.pprint(token_inds)
###Output
[2, 1, 3, 4, 5]
###Markdown
Let's convince ourselves that worked:
###Code
print([id_2_word[tok_idx] for tok_idx in token_inds])
###Output
['we', '<unk>', 'always', 'have', 'paris']
###Markdown
Padding for windows. In the word window classifier, for each word in the sentence we want to get the +/- n window around the word, where 0 <= n < len(sentence).In order for such windows to be defined for words at the beginning and ends of the sentence, we actually want to insert padding around the sentence before converting to indices:
###Code
def pad_sentence_for_window(sentence, window_size, pad_token="<pad>"):
return [pad_token]*window_size + sentence + [pad_token]*window_size
window_size = 2
instance = pad_sentence_for_window(train_sents[0], window_size)
print(instance)
###Output
['<pad>', '<pad>', 'we', "'ll", 'always', 'have', 'paris', '<pad>', '<pad>']
###Markdown
Let's make sure this works with our vocabulary:
###Code
for sent in train_sents:
tok_idxs = convert_tokens_to_inds(pad_sentence_for_window(sent, window_size), word_2_id)
print([id_2_word[idx] for idx in tok_idxs])
###Output
['<pad>', '<pad>', 'we', '<unk>', 'always', 'have', 'paris', '<pad>', '<pad>']
['<pad>', '<pad>', 'i', 'live', 'in', 'germany', '<pad>', '<pad>']
['<pad>', '<pad>', 'he', 'comes', 'from', 'denmark', '<pad>', '<pad>']
['<pad>', '<pad>', 'the', '<unk>', 'of', 'denmark', 'is', 'copenhagen', '<pad>', '<pad>']
###Markdown
Batching sentences together with a DataLoader When we train our model, we rarely update with respect to a single training instance at a time, because a single instance provides a very noisy estimate of the global loss's gradient. We instead construct small *batches* of data, and update parameters for each batch. Given some batch size, we want to construct batch tensors out of the word index lists we've just created with our vocab.For each length B list of inputs, we'll have to: (1) Add window padding to sentences in the batch like we just saw. (2) Add additional padding so that each sentence in the batch is the same length. (3) Make sure our labels are in the desired format.At the level of the dataset we want: (4) Easy shuffling, because shuffling from one training epoch to the next gets rid of pathological batches that are tough to learn from. (5) Making sure we shuffle inputs and their labels together! PyTorch provides us with an object *torch.utils.data.DataLoader* that gets us (4) and (5). All that's required of us is to specify a *collate_fn* that tells it how to do (1), (2), and (3).
###Code
l = torch.LongTensor(train_labels[0])
pp.pprint(("raw train label instance", l))
print(l.size())
one_hots = torch.zeros((2, len(l)))
pp.pprint(("unfilled label instance", one_hots))
print(one_hots.size())
one_hots[1] = l
pp.pprint(("one-hot labels", one_hots))
l_not = ~l.byte()
one_hots[0] = l_not
pp.pprint(("one-hot labels", one_hots))
from torch.utils.data import DataLoader
from functools import partial
def my_collate(data, window_size, word_2_id):
"""
For some chunk of sentences and labels
    -add window padding
-pad for lengths using pad_sequence
-convert our labels to one-hots
-return padded inputs, one-hot labels, and lengths
"""
x_s, y_s = zip(*data)
# deal with input sentences as we've seen
window_padded = [convert_tokens_to_inds(pad_sentence_for_window(sentence, window_size), word_2_id)
for sentence in x_s]
# append zeros to each list of token ids in batch so that they are all the same length
padded = nn.utils.rnn.pad_sequence([torch.LongTensor(t) for t in window_padded], batch_first=True)
# convert labels to one-hots
labels = []
lengths = []
for y in y_s:
lengths.append(len(y))
label = torch.zeros((len(y),2 ))
true = torch.LongTensor(y)
false = ~true.byte()
label[:, 0] = false
label[:, 1] = true
labels.append(label)
padded_labels = nn.utils.rnn.pad_sequence(labels, batch_first=True)
return padded.long(), padded_labels, torch.LongTensor(lengths)
# Shuffle True is good practice for train loaders.
# Use functools.partial to construct a partially populated collate function
example_loader = DataLoader(list(zip(train_sents,
train_labels)),
batch_size=2,
shuffle=True,
collate_fn=partial(my_collate, window_size=2, word_2_id=word_2_id))
for batched_input, batched_labels, batch_lengths in example_loader:
pp.pprint(("inputs", batched_input, batched_input.size()))
pp.pprint(("labels", batched_labels, batched_labels.size()))
pp.pprint(batch_lengths)
break
###Output
('inputs',
tensor([[ 0, 0, 2, 1, 3, 4, 5, 0, 0],
[ 0, 0, 10, 11, 12, 13, 0, 0, 0]]),
torch.Size([2, 9]))
('labels',
tensor([[[1., 0.],
[1., 0.],
[1., 0.],
[1., 0.],
[0., 1.]],
[[1., 0.],
[1., 0.],
[1., 0.],
[0., 1.],
[0., 0.]]]),
torch.Size([2, 5, 2]))
tensor([5, 4])
###Markdown
Modeling Thinking through vectorization of word windows.Before we go ahead and build our model, let's think about the first thing it needs to do to its inputs.We're passed batches of sentences. For each sentence i in the batch, for each word j in the sentence, we want to construct a single tensor out of the embeddings surrounding word j in the +/- n window.Thus, the first thing we're going to need is a (B, L, 2N+1) tensor of token indices. A *terrible* but nevertheless informative *iterative* solution looks something like the following, where we iterate through batch elements in our (dummy) input, iterate over the non-padded word positions in each, and for each non-padded word position construct a window:
###Code
dummy_input = torch.zeros(2, 8).long()
dummy_input[:,2:-2] = torch.arange(1,9).view(2,4)
pp.pprint(dummy_input)
dummy_output = [[[dummy_input[i, j-2+k].item() for k in range(2*2+1)]
for j in range(2, 6)]
for i in range(2)]
dummy_output = torch.LongTensor(dummy_output)
print(dummy_output.size())
pp.pprint(dummy_output)
###Output
torch.Size([2, 4, 5])
tensor([[[0, 0, 1, 2, 3],
[0, 1, 2, 3, 4],
[1, 2, 3, 4, 0],
[2, 3, 4, 0, 0]],
[[0, 0, 5, 6, 7],
[0, 5, 6, 7, 8],
[5, 6, 7, 8, 0],
[6, 7, 8, 0, 0]]])
###Markdown
*Technically* it works: For each element in the batch, for each word in the original sentence and ignoring window padding, we've got the 5 token indices centered at that word. But in practice this will be crazy slow. Instead, we ideally want to find the right tensor operation in the PyTorch arsenal. Here, that happens to be __Tensor.unfold__.
###Code
dummy_input.unfold(1, 2*2+1, 1)
###Output
_____no_output_____
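As a quick sanity check, the single `unfold` call above reproduces exactly the windows we built iteratively (using only the `dummy_input` and `dummy_output` defined above):

```python
# unfold(dimension=1, size=5, step=1) slides a length-5 window along dim 1.
windows = dummy_input.unfold(1, 2*2+1, 1)
assert windows.size() == (2, 4, 5)
assert torch.equal(windows, dummy_output)
```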
###Markdown
A model in full. In PyTorch, we implement models by extending the nn.Module class. Minimally, this requires implementing an *\_\_init\_\_* function and a *forward* function.In *\_\_init\_\_* we want to store model parameters (weights) and hyperparameters (dimensions).
###Code
class SoftmaxWordWindowClassifier(nn.Module):
"""
A one-layer, binary word-window classifier.
"""
def __init__(self, config, vocab_size, pad_idx=0):
super(SoftmaxWordWindowClassifier, self).__init__()
"""
Instance variables.
"""
self.window_size = 2*config["half_window"]+1
self.embed_dim = config["embed_dim"]
self.hidden_dim = config["hidden_dim"]
self.num_classes = config["num_classes"]
self.freeze_embeddings = config["freeze_embeddings"]
"""
Embedding layer
        -model holds an embedding for each word in our vocab
-sets aside a special index in the embedding matrix for padding vector (of zeros)
-by default, embeddings are parameters (so gradients pass through them)
"""
self.embed_layer = nn.Embedding(vocab_size, self.embed_dim, padding_idx=pad_idx)
if self.freeze_embeddings:
self.embed_layer.weight.requires_grad = False
"""
Hidden layer
        -we want to map embedded word windows of dim window_size*self.embed_dim to a hidden layer.
-nn.Sequential allows you to efficiently specify sequentially structured models
-first the linear transformation is evoked on the embedded word windows
-next the nonlinear transformation tanh is evoked.
"""
self.hidden_layer = nn.Sequential(nn.Linear(self.window_size*self.embed_dim,
self.hidden_dim),
nn.Tanh())
"""
Output layer
        -we want to map elements of the hidden layer (of size self.hidden_dim) to a number of classes.
"""
self.output_layer = nn.Linear(self.hidden_dim, self.num_classes)
"""
Softmax
-The final step of the softmax classifier: mapping final hidden layer to class scores.
-pytorch has both logsoftmax and softmax functions (and many others)
-since our loss is the negative LOG likelihood, we use logsoftmax
-technically you can take the softmax, and take the log but PyTorch's implementation
is optimized to avoid numerical underflow issues.
"""
self.log_softmax = nn.LogSoftmax(dim=2)
def forward(self, inputs):
"""
Let B:= batch_size
L:= window-padded sentence length
D:= self.embed_dim
S:= self.window_size
H:= self.hidden_dim
inputs: a (B, L) tensor of token indices
"""
B, L = inputs.size()
"""
Reshaping.
Takes in a (B, L) LongTensor
Outputs a (B, L~, S) LongTensor
"""
        # First, get our word windows for each word in our input.
token_windows = inputs.unfold(1, self.window_size, 1)
_, adjusted_length, _ = token_windows.size()
# Good idea to do internal tensor-size sanity checks, at the least in comments!
assert token_windows.size() == (B, adjusted_length, self.window_size)
"""
Embedding.
Takes in a torch.LongTensor of size (B, L~, S)
Outputs a (B, L~, S, D) FloatTensor.
"""
embedded_windows = self.embed_layer(token_windows)
"""
Reshaping.
Takes in a (B, L~, S, D) FloatTensor.
Resizes it into a (B, L~, S*D) FloatTensor.
-1 argument "infers" what the last dimension should be based on leftover axes.
"""
embedded_windows = embedded_windows.view(B, adjusted_length, -1)
"""
Layer 1.
Takes in a (B, L~, S*D) FloatTensor.
Resizes it into a (B, L~, H) FloatTensor
"""
layer_1 = self.hidden_layer(embedded_windows)
"""
Layer 2
Takes in a (B, L~, H) FloatTensor.
Resizes it into a (B, L~, 2) FloatTensor.
"""
output = self.output_layer(layer_1)
"""
Softmax.
Takes in a (B, L~, 2) FloatTensor of unnormalized class scores.
Outputs a (B, L~, 2) FloatTensor of (log-)normalized class scores.
"""
output = self.log_softmax(output)
return output
###Output
_____no_output_____
###Markdown
Training.Now that we've got a model, we have to train it.
###Code
def loss_function(outputs, labels, lengths):
"""Computes negative LL loss on a batch of model predictions."""
B, L, num_classes = outputs.size()
num_elems = lengths.sum().float()
# get only the values with non-zero labels
loss = outputs*labels
# rescale average
return -loss.sum() / num_elems
def train_epoch(loss_function, optimizer, model, train_data):
## For each batch, we must reset the gradients
## stored by the model.
total_loss = 0
for batch, labels, lengths in train_data:
# clear gradients
optimizer.zero_grad()
# evoke model in training mode on batch
outputs = model.forward(batch)
# compute loss w.r.t batch
loss = loss_function(outputs, labels, lengths)
        # pass gradients back, starting from the loss value
loss.backward()
# update parameters
optimizer.step()
total_loss += loss.item()
# return the total to keep track of how you did this time around
return total_loss
config = {"batch_size": 4,
"half_window": 2,
"embed_dim": 25,
"hidden_dim": 25,
"num_classes": 2,
"freeze_embeddings": False,
}
learning_rate = .0002
num_epochs = 10000
model = SoftmaxWordWindowClassifier(config, len(word_2_id))
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
train_loader = torch.utils.data.DataLoader(list(zip(train_sents, train_labels)),
batch_size=2,
shuffle=True,
collate_fn=partial(my_collate, window_size=2, word_2_id=word_2_id))
losses = []
for epoch in range(num_epochs):
epoch_loss = train_epoch(loss_function, optimizer, model, train_loader)
if epoch % 100 == 0:
losses.append(epoch_loss)
print(losses)
###Output
[1.4967301487922668, 1.408476173877716, 1.3443800806999207, 1.2865177989006042, 1.2272869944572449, 1.1691689491271973, 1.1141255497932434, 1.0696152448654175, 1.023829996585846, 0.978839099407196, 0.937132716178894, 0.8965558409690857, 0.8551942408084869, 0.8171629309654236, 0.7806291580200195, 0.7467736303806305, 0.7136902511119843, 0.6842415034770966, 0.6537061333656311, 0.6195352077484131, 0.5914349257946014, 0.5682767033576965, 0.5430445969104767, 0.5190333724021912, 0.49760693311691284, 0.47582894563674927, 0.45516568422317505, 0.4298042058944702, 0.41591694951057434, 0.39368535578250885, 0.3817802667617798, 0.36694473028182983, 0.35200121998786926, 0.3370656222105026, 0.31913231313228607, 0.3065541982650757, 0.2946578562259674, 0.28842414915561676, 0.27765345573425293, 0.26745346188545227, 0.25778329372406006, 0.24860621988773346, 0.23990143835544586, 0.22729042172431946, 0.22337404638528824, 0.21637336909770966, 0.20889568328857422, 0.20218300074338913, 0.19230441004037857, 0.19007354974746704, 0.18426819890737534, 0.17840557545423508, 0.173139289021492, 0.16499895602464676, 0.1602725237607956, 0.1590176522731781, 0.15144427865743637, 0.14732149988412857, 0.14641961455345154, 0.13959994912147522, 0.13598214834928513, 0.13251276314258575, 0.13197287172079086, 0.12871850654482841, 0.1253872662782669, 0.12239058315753937, 0.1171659529209137, 0.11695125326514244, 0.11428486183285713, 0.11171672493219376, 0.10924769192934036, 0.10686498507857323, 0.1045713983476162, 0.10218603909015656, 0.10022115334868431, 0.09602915123105049, 0.09616792947053909, 0.09424330666661263, 0.09223027899861336, 0.090587567538023, 0.08691023662686348, 0.08717184513807297, 0.08540527895092964, 0.0839710421860218, 0.08230703324079514, 0.0808291956782341, 0.07777531817555428, 0.0780084915459156, 0.07678597420454025, 0.07535399869084358, 0.07408255711197853, 0.07296567782759666, 0.07176320999860764, 0.07059716433286667, 0.0694643184542656, 0.06684627756476402, 0.06579622253775597, 0.06477534398436546, 0.06378135085105896, 0.06281331554055214]
###Markdown
Prediction.
###Code
test_loader = torch.utils.data.DataLoader(list(zip(test_sents, test_labels)),
batch_size=1,
shuffle=False,
collate_fn=partial(my_collate, window_size=2, word_2_id=word_2_id))
for test_instance, labs, _ in test_loader:
outputs = model.forward(test_instance)
print(torch.argmax(outputs, dim=2))
print(torch.argmax(labs, dim=2))
###Output
tensor([[0, 0, 0, 1]])
tensor([[0, 0, 0, 1]])
|
02_Keras_API.ipynb | ###Markdown
Keras API IntroductionThis tutorial is about the Keras API which is already highly developed with very good documentation - and the development continues. It seems likely that Keras will be the standard API for TensorFlow in the future so it is recommended that you use it instead of the other APIs. Flowchart There are two convolutional layers, each followed by a down-sampling using max-pooling (not shown in this flowchart). Then there are two fully-connected layers ending in a softmax-classifier. ![Flowchart](images/02_network_flowchart.png) Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import math
# from keras.backend.tensorflow_backend import set_session
# config = tf.ConfigProto()
# config.gpu_options.per_process_gpu_memory_fraction = 0.1
# set_session(tf.Session(config=config))
###Output
_____no_output_____
###Markdown
We need to import several things from Keras. Note the long import-statements. This might be a bug. Hopefully it will be possible to write shorter and more elegant lines in the future.
###Code
# from tf.keras.models import Sequential # This does not work!
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import InputLayer, Input
from tensorflow.python.keras.layers import Reshape, MaxPooling2D
from tensorflow.python.keras.layers import Conv2D, Dense, Flatten
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from mnist import MNIST
data = MNIST(data_dir="data/MNIST/")
###Output
_____no_output_____
###Markdown
The MNIST data-set has now been loaded and consists of 70.000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(data.num_train))
print("- Validation-set:\t{}".format(data.num_val))
print("- Test-set:\t\t{}".format(data.num_test))
###Output
_____no_output_____
###Markdown
Copy some of the data-dimensions for convenience.
###Code
# The number of pixels in each dimension of an image.
img_size = data.img_size # 28
# The images are stored in one-dimensional arrays of this length.
img_size_flat = data.img_size_flat # 784
# Tuple with height and width of images used to reshape arrays.
img_shape = data.img_shape # (28, 28)
# This is used for reshaping in Keras.
img_shape_full = data.img_shape_full # (28, 28, 1)
# Number of classes, one class for each of 10 digits.
num_classes = data.num_classes # 10
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = data.num_channels # 1
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the images from the test-set.
images = data.x_test[0:9]
# Get the true classes for those images.
cls_true = data.y_test_cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
Helper-function to plot example errorsFunction for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Boolean array whether the predicted class is incorrect.
incorrect = (cls_pred != data.y_test_cls)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.x_test[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.y_test_cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
PrettyTensor APIThis is how the Convolutional Neural Network was implemented in Tutorial 03 using the PrettyTensor API. It is shown here for easy comparison to the Keras implementation below.
###Code
# if False:
# x_pretty = pt.wrap(x_image)
# with pt.defaults_scope(activation_fn=tf.nn.relu):
# y_pred, loss = x_pretty.\
# conv2d(kernel=5, depth=16, name='layer_conv1').\
# max_pool(kernel=2, stride=2).\
# conv2d(kernel=5, depth=36, name='layer_conv2').\
# max_pool(kernel=2, stride=2).\
# flatten().\
# fully_connected(size=128, name='layer_fc1').\
# softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Sequential ModelThe Keras API has two modes of constructing Neural Networks. The simplest is the Sequential Model which only allows for the layers to be added in sequence.![Flowchart](images/02_network_flowchart.png)
###Code
# Start construction of the Keras Sequential model.
model = Sequential()
# Add an input layer which is similar to a feed_dict in TensorFlow.
# Note that the input-shape must be a tuple containing the image-size.
model.add(InputLayer(input_shape=(img_size_flat,)))
# The input is a flattened array with 784 elements,
# but the convolutional layers expect images with shape (28, 28, 1)
model.add(Reshape(img_shape_full))
# First convolutional layer with ReLU-activation and max-pooling.
model.add(Conv2D(kernel_size=5, strides=1, filters=16, padding='same',
activation='relu', name='layer_conv1'))
model.add(MaxPooling2D(pool_size=2, strides=2))
# Second convolutional layer with ReLU-activation and max-pooling.
model.add(Conv2D(kernel_size=5, strides=1, filters=36, padding='same',
activation='relu', name='layer_conv2'))
model.add(MaxPooling2D(pool_size=2, strides=2))
# Flatten the 4-rank output of the convolutional layers
# to 2-rank that can be input to a fully-connected / dense layer.
model.add(Flatten())
# First fully-connected / dense layer with ReLU-activation.
model.add(Dense(128, activation='relu'))
# Last fully-connected / dense layer with softmax-activation
# for use in classification.
model.add(Dense(num_classes, activation='softmax'))
model.summary()
###Output
_____no_output_____
###Markdown
Model CompilationThe Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model "compilation" in Keras.We can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate.
###Code
from tensorflow.python.keras.optimizers import Adam, Adagrad, Adadelta
# optimizer = Adam(lr=1e-3)
optimizer = Adagrad(lr=1e-3)
# optimizer = Adadelta(lr=1e-3)
###Output
_____no_output_____
###Markdown
For a classification-problem such as MNIST which has 10 possible classes, we need to use the loss-function called `categorical_crossentropy`. The performance metric we are interested in is the classification accuracy.
###Code
model.compile(optimizer=optimizer,
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
TrainingNow that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times.
###Code
model.fit(x=data.x_train, y=data.y_train, validation_data = (data.x_val, data.y_val), epochs=1, batch_size=128)
###Output
_____no_output_____
###Markdown
EvaluationNow that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input.
###Code
result = model.evaluate(x=data.x_test,
y=data.y_test, verbose = 1)
###Output
_____no_output_____
###Markdown
We can print all the performance metrics for the test-set.
###Code
for name, value in zip(model.metrics_names, result):
print(name, value)
###Output
_____no_output_____
###Markdown
Or we can just print the classification accuracy.
###Code
print("{0}: {1:.2%}".format(model.metrics_names[1], result[1]))
###Output
_____no_output_____
###Markdown
PredictionWe can also predict the classification for new images. We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead.
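If you want to try your own images, here is a minimal sketch of preparing one for the model (assuming Pillow is available and `my_digit.png` is a hypothetical grayscale image of a dark digit on a light background; the model expects flattened 28x28 inputs scaled to [0, 1]):

```python
from PIL import Image

# Hypothetical example -- the file name and preprocessing are assumptions.
img = Image.open('my_digit.png').convert('L').resize((img_size, img_size))
arr = 1.0 - np.asarray(img, dtype=np.float32) / 255.0  # invert so the digit is bright, like MNIST
own_images = arr.reshape(1, img_size_flat)              # shape (1, 784)
print(np.argmax(model.predict(x=own_images), axis=1))
```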
###Code
images = data.x_test[0:9] # load 9 images from the test-set.
###Output
_____no_output_____
###Markdown
These are the true class-number for those images. This is only used when plotting the images.
###Code
cls_true = data.y_test_cls[0:9]
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
Get the predicted classes as One-Hot encoded arrays.
###Code
y_pred = model.predict(x=images, verbose = 1)
###Output
_____no_output_____
###Markdown
Get the predicted classes as integers.
###Code
cls_pred = np.argmax(y_pred, axis=1)
plot_images(images=images,
cls_true=cls_true,
cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Examples of Mis-Classified ImagesWe can plot some examples of mis-classified images from the test-set.First we get the predicted classes for all the images in the test-set:
###Code
y_pred = model.predict(x=data.x_test)
###Output
_____no_output_____
###Markdown
Then we convert the predicted class-numbers from One-Hot encoded arrays to integers.
###Code
cls_pred = np.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Plot some of the mis-classified images.
###Code
plot_example_errors(cls_pred)
###Output
_____no_output_____
###Markdown
Functional ModelThe Keras API can also be used to construct more complicated networks using the Functional Model. This may look a little confusing at first, because each call to the Keras API will create and return an instance that is itself callable. It is not clear whether it is a function or an object - but we can call it as if it is a function. This allows us to build computational graphs that are more complex than the Sequential Model allows.
###Code
# Create an input layer which is similar to a feed_dict in TensorFlow.
# Note that the input-shape must be a tuple containing the image-size.
inputs = Input(shape=(img_size_flat,))
# Variable used for building the Neural Network.
net = inputs
# The input is an image as a flattened array with 784 elements.
# But the convolutional layers expect images with shape (28, 28, 1)
net = Reshape(img_shape_full)(net)
# First convolutional layer with ReLU-activation and max-pooling.
net = Conv2D(kernel_size=5, strides=1, filters=16, padding='same',
activation='relu', name='layer_conv1')(net)
net = MaxPooling2D(pool_size=2, strides=2)(net)
# Second convolutional layer with ReLU-activation and max-pooling.
net = Conv2D(kernel_size=5, strides=1, filters=36, padding='same',
activation='relu', name='layer_conv2')(net)
net = MaxPooling2D(pool_size=2, strides=2)(net)
# Flatten the output of the conv-layer from 4-dim to 2-dim.
net = Flatten()(net)
# First fully-connected / dense layer with ReLU-activation.
net = Dense(128, activation='relu')(net)
# Last fully-connected / dense layer with softmax-activation
# so it can be used for classification.
net = Dense(num_classes, activation='softmax')(net)
# Output of the Neural Network.
outputs = net
###Output
_____no_output_____
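The graph above is still a linear chain, so it could equally have been written with the Sequential Model. To see why the Functional Model is more general, here is a tiny hypothetical sketch (not used elsewhere in this tutorial) of a two-branch topology that the Sequential Model cannot express; it assumes the `concatenate` helper and `Model` class are available from the same Keras namespace used above:

```python
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import concatenate

# Hypothetical two-branch network: the same input feeds two parallel dense
# branches whose outputs are concatenated before the final classifier.
branch_in = Input(shape=(img_size_flat,))
a = Dense(64, activation='relu')(branch_in)
b = Dense(64, activation='tanh')(branch_in)
merged = concatenate([a, b])
branch_out = Dense(num_classes, activation='softmax')(merged)
branch_model = Model(inputs=branch_in, outputs=branch_out)
```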
###Markdown
Model CompilationWe have now defined the architecture of the model with its input and output. We now have to create a Keras model and compile it with a loss-function and optimizer, so it is ready for training.
###Code
from tensorflow.python.keras.models import Model
###Output
_____no_output_____
###Markdown
Create a new instance of the Keras Functional Model. We give it the inputs and outputs of the Convolutional Neural Network that we constructed above.
###Code
model2 = Model(inputs=inputs, outputs=outputs)
###Output
_____no_output_____
###Markdown
Compile the Keras model using the RMSprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here.
###Code
model2.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
TrainingThe model has now been defined and compiled so it can be trained using the same `fit()` function as used in the Sequential Model above. This also takes numpy-arrays as input.
###Code
model2.fit(x=data.x_train,
y=data.y_train,
epochs=1, batch_size=128)
###Output
_____no_output_____
###Markdown
EvaluationOnce the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model.
###Code
result = model2.evaluate(x=data.x_test,
y=data.y_test)
###Output
_____no_output_____
###Markdown
The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency.
###Code
for name, value in zip(model2.metrics_names, result):
print(name, value)
###Output
_____no_output_____
###Markdown
We can also print the classification accuracy as a percentage:
###Code
print("{0}: {1:.2%}".format(model2.metrics_names[1], result[1]))
###Output
_____no_output_____
###Markdown
Examples of Mis-Classified ImagesWe can plot some examples of mis-classified images from the test-set.First we get the predicted classes for all the images in the test-set:
###Code
y_pred = model2.predict(x=data.x_test)
###Output
_____no_output_____
###Markdown
Then we convert the predicted class-numbers from One-Hot encoded arrays to integers.
###Code
cls_pred = np.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Plot some of the mis-classified images.
###Code
plot_example_errors(cls_pred)
###Output
_____no_output_____
###Markdown
Save & Load ModelNOTE: You need to install `h5py` for this to work!Tutorial 04 was about saving and restoring the weights of a model using native TensorFlow code. It was an absolutely horrible API! Fortunately, Keras makes this very easy.This is the file-path where we want to save the Keras model.
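If `h5py` is not already available in the environment, it can be installed from the notebook first (a minimal example; you may need to restart the kernel afterwards):

```python
!pip3 install --user -q h5py
```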
###Code
path_model = 'model.keras'
###Output
_____no_output_____
###Markdown
Saving a Keras model with the trained weights is then just a single function call, as it should be.
###Code
model2.save(path_model)
###Output
_____no_output_____
###Markdown
Delete the model from memory so we are sure it is no longer used.
###Code
del model2
###Output
_____no_output_____
###Markdown
We need to import this Keras function for loading the model.
###Code
from tensorflow.python.keras.models import load_model
###Output
_____no_output_____
###Markdown
Loading the model is then just a single function-call, as it should be.
###Code
model3 = load_model(path_model)
###Output
_____no_output_____
###Markdown
We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers.
###Code
images = data.x_test[0:9]
cls_true = data.y_test_cls[0:9]
###Output
_____no_output_____
###Markdown
We then use the restored model to predict the class-numbers for those images.
###Code
y_pred = model3.predict(x=images)
###Output
_____no_output_____
###Markdown
Get the class-numbers as integers.
###Code
cls_pred = np.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Plot the images with their true and predicted class-numbers.
###Code
plot_images(images=images,
cls_pred=cls_pred,
cls_true=cls_true)
###Output
_____no_output_____
###Markdown
Visualization of Layer Weights and Outputs Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(weights)
w_max = np.max(weights)
# Number of filters used in the conv. layer.
num_filters = weights.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = weights[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Get Layers
Keras has a simple way of listing the layers in the model.
###Code
model3.summary()
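# A compact way to find the layer-indices used in the next cells:
for i, layer in enumerate(model3.layers):
    print(i, layer.name)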
###Output
_____no_output_____
###Markdown
We count the indices to get the layers we want. The input-layer has index 0.
###Code
layer_input = model3.layers[0]
###Output
_____no_output_____
###Markdown
The first convolutional layer has index 2.
###Code
layer_conv1 = model3.layers[2]
layer_conv1
###Output
_____no_output_____
###Markdown
The second convolutional layer has index 4.
###Code
layer_conv2 = model3.layers[4]
###Output
_____no_output_____
###Markdown
Convolutional Weights
Now that we have the layers we can easily get their weights.
###Code
weights_conv1 = layer_conv1.get_weights()[0]
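# get_weights() returns [kernel, bias] for a standard Conv2D layer
# (assuming the layer was built with use_bias=True, the Keras default):
# biases_conv1 = layer_conv1.get_weights()[1]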
###Output
_____no_output_____
###Markdown
This gives us a 4-rank tensor.
###Code
weights_conv1.shape
###Output
_____no_output_____
###Markdown
Plot the weights using the helper-function from above.
###Code
plot_conv_weights(weights=weights_conv1, input_channel=0)
###Output
_____no_output_____
###Markdown
We can also get the weights for the second convolutional layer and plot them.
###Code
weights_conv2 = layer_conv2.get_weights()[0]
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
Helper-function for plotting the output of a convolutional layer
###Code
def plot_conv_output(values):
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input Image
Helper-function for plotting a single image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.x_test[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Output of Convolutional Layer - Method 1
There are different ways of getting the output of a layer in a Keras model. This method uses a so-called K-function which turns a part of the Keras model into a function.
###Code
from tensorflow.python.keras import backend as K
output_conv1 = K.function(inputs=[layer_input.input],
outputs=[layer_conv1.output])
###Output
_____no_output_____
###Markdown
We can then call this function with the input image. Note that the image is wrapped in two lists because the function expects an array of that dimensionality. Likewise, the function returns an array with one more dimension than we want, so we just take the first element.
###Code
layer_output1 = output_conv1([[image1]])[0]
layer_output1.shape
###Output
_____no_output_____
###Markdown
We can then plot the output of all 16 channels of the convolutional layer.
###Code
plot_conv_output(values=layer_output1)
###Output
_____no_output_____
###Markdown
Output of Convolutional Layer - Method 2
Keras also has another method for getting the output of a layer inside the model. This creates another Functional Model using the same input as the original model, but the output is now taken from the convolutional layer that we are interested in.
###Code
output_conv2 = Model(inputs=layer_input.input,
outputs=layer_conv2.output)
###Output
_____no_output_____
###Markdown
This creates a new model-object where we can call the typical Keras functions. To get the output of the convolutional layer we call the `predict()` function with the input image.
###Code
layer_output2 = output_conv2.predict(np.array([image1]))
layer_output2.shape
###Output
_____no_output_____
###Markdown
We can then plot the images for all 36 channels.
###Code
plot_conv_output(values=layer_output2)
###Output
_____no_output_____ |
deployment_notebooks/quick_notebooks/quick_ACCESSaccount_deployment.ipynb | ###Markdown
This notebook demonstrates how to use the `s1-enumerator` to get a full time series of GUNWs. We basically take each month in the acceptable date range, increment by a month, and make sure the temporal window is large enough to ensure connectivity across data gaps.
Parameters
This is what the operator will have to change. Comments are provided below.
###Code
# toggle user-controlled parameters here
import datetime
import json
# product cutline
aoi_shapefile = '../../aois/CA_pathNumber115.geojson'
# load AO geojson and strip variables
with open(aoi_shapefile, 'r') as file:
data = json.load(file)
json_keys = data['features'][0]['properties'].keys()
# assign local variables from dict
locals().update(data['features'][0]['properties'])
# Override metadata keys
# Warning, toggle with caution
# Only to be used if you are testing and intentionally updating AO geojsons
update_AO = True
### Spatial coverage constraint parameter 'azimuth_mismatch'
# The merged SLC area over the AOI is allowed to be smaller by 'azimuth_mismatch' x swath width (i.e. 250km)
if 'azimuth_mismatch' not in json_keys or update_AO:
azimuth_mismatch = 5 # adjust as necessary
data['features'][0]['properties']['azimuth_mismatch'] = azimuth_mismatch
# Specify deployment URL
#deploy_url = 'https://hyp3-tibet.asf.alaska.edu' #for Tibet
deploy_url = 'https://hyp3-isce.asf.alaska.edu' #for access
data['features'][0]['properties']['deploy_url'] = deploy_url
# Number of nearest neighbors
if 'num_neighbors' not in json_keys or update_AO:
num_neighbors = 3 # adjust as necessary
data['features'][0]['properties']['num_neighbors'] = num_neighbors
#set temporal parameters
if 'min_days_backward' not in json_keys or update_AO:
min_days_backward = 0
data['features'][0]['properties']['min_days_backward'] = min_days_backward
today = datetime.datetime.now()
# Earliest year for reference frames
START_YEAR = 2014
# Latest year for reference frames
END_YEAR = today.year
YEARS_OF_INTEREST = list(range(START_YEAR,END_YEAR+1))
# Adjust depending on seasonality
# For annual IFGs, select a single months of interest and you will get what you want.
if 'month_range_lower' not in json_keys or update_AO:
month_range_lower = 1 # adjust as necessary
data['features'][0]['properties']['month_range_lower'] = month_range_lower
if 'month_range_upper' not in json_keys or update_AO:
month_range_upper = 12 # adjust as necessary
data['features'][0]['properties']['month_range_upper'] = month_range_upper
MONTHS_OF_INTEREST = list(range(month_range_lower,month_range_upper+1))
############################################################################################################
## OPTIONAL set temporal sampling parameters
# Temporal sampling invervals
if 'min_days_backward_timesubset' not in json_keys or update_AO:
# Specify as many temporal sampling intervals as desired (e.g. 90 (days), 180 (days) = semiannual, 365 (days) = annual, etc.)
min_days_backward_timesubset = []
if min_days_backward_timesubset != []:
data['features'][0]['properties']['min_days_backward_timesubset'] = ','.join(map(str, min_days_backward_timesubset))
if 'min_days_backward_timesubset' in json_keys:
min_days_backward_timesubset = [int(s) for s in data['features'][0]['properties']['min_days_backward_timesubset'].split(',')]
# apply temporal window to all temporal sampling intervals (hardcoded to 60 days)
temporal_window_days_timesubset = 60
min_days_backward_timesubset = [i - round(temporal_window_days_timesubset/2) for i in min_days_backward_timesubset]
if any(x<1 for x in min_days_backward_timesubset):
raise Exception("Your specified 'min_days_backward_timesubset' input is too small relative to"
"your specified 'temporal_window_days_timesubset' value. Adjust accordingly")
# Nearest neighbor sampling
if 'num_neighbors_timesubset' not in json_keys or update_AO:
# Specify corresponding nearest neighbor sampling for each temporal sampling interval (by default (n-)1)
num_neighbors_timesubset = []
if num_neighbors_timesubset != []:
data['features'][0]['properties']['num_neighbors_timesubset'] = ','.join(map(str, num_neighbors_timesubset))
if 'num_neighbors_timesubset' in json_keys:
num_neighbors_timesubset = [int(s) for s in data['features'][0]['properties']['num_neighbors_timesubset'].split(',')]
if len (num_neighbors_timesubset) != len(min_days_backward_timesubset):
raise Exception("Specified number of temporal sampling intervals DO NOT match specified nearest neighbor sampling")
############################################################################################################
# Define job-name
if 'job_name' not in json_keys or update_AO:
job_name = aoi_shapefile.split('/')[-1].split('.')[0].split('pathNumber')
job_name = ''.join(job_name)
job_name = job_name[-20:]
data['features'][0]['properties']['job_name'] = job_name
# product directory
prod_dir = job_name
# if operator variables do not exist, set and populate geojson with them
with open(aoi_shapefile, 'w') as file:
json.dump(data, file)
from s1_enumerator import get_aoi_dataframe, distill_all_pairs, enumerate_ifgs, get_s1_coverage_tiles, enumerate_ifgs_from_stack, get_s1_stack_by_dataframe
import concurrent
from rasterio.crs import CRS
from s1_enumerator import duplicate_gunw_found
from tqdm import tqdm
from shapely.geometry import Point, shape
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
from dateutil.relativedelta import relativedelta
import networkx as nx
import boto3
import hyp3_sdk
import copy
def shapefile_area(file_bbox,
bounds = False):
"""Compute km\u00b2 area of shapefile."""
# import dependencies
from pyproj import Proj
# loop through polygons
shape_area = 0
# pass single polygon as list
if file_bbox.type == 'Polygon': file_bbox = [file_bbox]
for polyobj in file_bbox:
#first check if empty
if polyobj.is_empty:
shape_area += 0
continue
# get coords
if bounds:
# Pass coordinates of bounds as opposed to cutline
# Necessary for estimating DEM/mask footprints
WSEN = polyobj.bounds
lon = np.array([WSEN[0],WSEN[0],WSEN[2],WSEN[2],WSEN[0]])
lat = np.array([WSEN[1],WSEN[3],WSEN[3],WSEN[1],WSEN[1]])
else:
lon, lat = polyobj.exterior.coords.xy
# use equal area projection centered on/bracketing AOI
pa = Proj("+proj=aea +lat_1={} +lat_2={} +lat_0={} +lon_0={}". \
format(min(lat), max(lat), (max(lat)+min(lat))/2, \
(max(lon)+min(lon))/2))
x, y = pa(lon, lat)
cop = {"type": "Polygon", "coordinates": [zip(x, y)]}
shape_area += shape(cop).area/1e6 # area in km^2
return shape_area
def continuous_time(product_df, iter_id='fileID'):
"""
Split the products into spatiotemporally continuous groups.
Split products by individual, continuous interferograms.
Input must be already sorted by pair and start-time to fit
the logic scheme below.
Using their time-tags, this function determines whether or not
successive products are in the same orbit.
If in the same orbit, the program determines whether or not they
overlap in time and are therefore spatially contiguous,
and rejects/reports cases for which there is no temporal overlap
and therefore a spatial gap.
"""
from shapely.ops import unary_union
# pass scenes that have no gaps
sorted_products = []
track_rejected_inds = []
pair_dict = {}
product_df_dict = product_df.to_dict('records')
# Check for (and remove) duplicate products
# If multiple pairs in list, cycle through
# and evaluate temporal connectivity.
for i in enumerate(product_df_dict[:-1]):
# Parse the first frame's metadata
scene_start = i[1]['startTime']
scene_end = i[1]['stopTime']
first_frame_ind = i[1]['ind_col']
first_frame = datetime.datetime.strptime( \
i[1]['fileID'][17:25], "%Y%m%d")
# Parse the second frame's metadata
new_scene_start = product_df_dict[i[0]+1]['startTime']
new_scene_end = product_df_dict[i[0]+1]['stopTime']
next_frame_ind = product_df_dict[i[0]+1]['ind_col']
next_frame = datetime.datetime.strptime( \
product_df_dict[i[0]+1]['fileID'][17:25], "%Y%m%d")
# Determine if next product in time is in same orbit AND overlaps
# AND corresponds to same scene
# If it is within same orbit cycle, try to append scene.
# This accounts for day change.
if abs(new_scene_end-scene_end) <= \
datetime.timedelta(minutes=100) \
and abs(next_frame-first_frame) <= \
datetime.timedelta(days=1):
# Don't export product if it is already tracked
# as a rejected scene
if first_frame_ind in track_rejected_inds or \
next_frame_ind in track_rejected_inds:
track_rejected_inds.append(first_frame_ind)
track_rejected_inds.append(next_frame_ind)
# Only pass scene if it temporally overlaps with reference scene
elif ((scene_end <= new_scene_start) and \
(new_scene_end <= scene_start)) or \
((scene_end >= new_scene_start) and \
(new_scene_end >= scene_start)):
# Check if dictionary for scene already exists,
# and if it does then append values
try:
dict_ind = sorted_products.index(next(item for item \
in sorted_products if i[1][iter_id] \
in item[iter_id]))
sorted_products[dict_ind] = {key: np.hstack([value] + \
[product_df_dict[i[0]+1][key]]).tolist() \
for key, value in sorted_products[dict_ind].items()}
# Match corresponding to scene NOT found,
# so initialize dictionary for new scene
except:
sorted_products.extend([dict(zip(i[1].keys(), \
[list(a) for a in zip(i[1].values(), \
product_df_dict[i[0]+1].values())]))])
# Else if scene doesn't overlap, this means there is a gap.
# Reject date from product list,
# and keep track of all failed dates
else:
track_rejected_inds.append(first_frame_ind)
track_rejected_inds.append(next_frame_ind)
# Products correspond to different dates,
# So pass both as separate scenes.
else:
# Check if dictionary for corresponding scene already exists.
if [item for item in sorted_products if i[1][iter_id] in \
item[iter_id]]==[] and i[1]['ind_col'] not in \
track_rejected_inds:
sorted_products.extend([dict(zip(i[1].keys(), \
[list(a) for a in zip(i[1].values())]))])
# Initiate new scene
if [item for item in sorted_products if \
product_df_dict[i[0]+1][iter_id] in item[iter_id]]==[] \
and next_frame_ind not in track_rejected_inds:
sorted_products.extend([dict(zip( \
product_df_dict[i[0]+1].keys(), \
[list(a) for a in \
zip(product_df_dict[i[0]+1].values())]))])
if first_frame_ind in track_rejected_inds:
track_rejected_inds.append(first_frame_ind)
if next_frame_ind in track_rejected_inds:
track_rejected_inds.append(next_frame_ind)
# Remove duplicate dates
track_rejected_inds = list(set(track_rejected_inds))
if len(track_rejected_inds) > 0:
print("{}/{} scenes rejected as stitched IFGs have gaps".format( \
len(track_rejected_inds), len(product_df)))
# Provide report of which files were kept vs. which were not.
print("Specifically, the following scenes were rejected:")
for item in product_df_dict:
if item['ind_col'] in track_rejected_inds:
print(item['fileID'])
else:
print("All {} scenes are spatially continuous.".format( \
len(sorted_products)))
# pass scenes that have no gaps
sorted_products = [item for item in sorted_products \
if not (any(x in track_rejected_inds for x in item['ind_col']))]
# Report dictionaries for all valid products
if sorted_products == []: #Check if pairs were successfully selected
raise Exception('No scenes meet spatial criteria '
'due to gaps and/or invalid input. '
'Nothing to export.')
# Combine polygons
for i in enumerate(sorted_products):
sorted_products[i[0]]['geometry'] = unary_union(i[1]['geometry'])
# combine and record scenes with gaps
track_kept_inds = pd.DataFrame(sorted_products)['ind_col'].to_list()
track_kept_inds = [item for sublist in track_kept_inds for item in sublist]
temp_gap_scenes_dict = [item for item in product_df_dict \
if not item['ind_col'] in track_kept_inds]
gap_scenes_dict = []
for i in enumerate(temp_gap_scenes_dict[:-1]):
# Parse the first frame's metadata
first_frame_ind = i[1]['ind_col']
first_frame = datetime.datetime.strptime( \
i[1]['fileID'][17:25], "%Y%m%d")
# Parse the second frame's metadata
next_frame_ind = temp_gap_scenes_dict[i[0]+1]['ind_col']
next_frame = datetime.datetime.strptime( \
temp_gap_scenes_dict[i[0]+1]['fileID'][17:25], "%Y%m%d")
# Determine if next product in time is in same orbit
# If it is within same orbit cycle, try to append scene.
# This accounts for day change.
if abs(next_frame-first_frame) <= \
datetime.timedelta(days=1):
# Check if dictionary for scene already exists,
# and if it does then append values
try:
dict_ind = gap_scenes_dict.index(next(item for item \
in gap_scenes_dict if i[1][iter_id] \
in item[iter_id]))
gap_scenes_dict[dict_ind] = {key: np.hstack([value] + \
[temp_gap_scenes_dict[i[0]+1][key]]).tolist() \
for key, value in gap_scenes_dict[dict_ind].items()}
# Match corresponding to scene NOT found,
# so initialize dictionary for new scene
except:
gap_scenes_dict.extend([dict(zip(i[1].keys(), \
[list(a) for a in zip(i[1].values(), \
temp_gap_scenes_dict[i[0]+1].values())]))])
# Products correspond to different dates,
# So pass both as separate scenes.
else:
# Check if dictionary for corresponding scene already exists.
if [item for item in gap_scenes_dict if i[1][iter_id] in \
item[iter_id]]==[]:
gap_scenes_dict.extend([dict(zip(i[1].keys(), \
[list(a) for a in zip(i[1].values())]))])
# Initiate new scene
if [item for item in gap_scenes_dict if \
temp_gap_scenes_dict[i[0]+1][iter_id] in item[iter_id]]==[]:
gap_scenes_dict.extend([dict(zip( \
temp_gap_scenes_dict[i[0]+1].keys(), \
[list(a) for a in \
zip(temp_gap_scenes_dict[i[0]+1].values())]))])
# there may be some extra missed pairs with gaps
if gap_scenes_dict != []:
extra_track_rejected_inds = pd.DataFrame(gap_scenes_dict)['ind_col'].to_list()
extra_track_rejected_inds = [item for sublist in extra_track_rejected_inds for item in sublist]
track_rejected_inds.extend(extra_track_rejected_inds)
return sorted_products, track_rejected_inds, gap_scenes_dict
def minimum_overlap_query(tiles,
aoi,
azimuth_mismatch=0.01,
iter_id='fileID'):
"""
Master function managing checks for SAR scene spatiotemporal contiguity
and filtering out scenes based off of user-defined spatial coverage threshold
"""
# initiate dataframe
tiles = tiles.sort_values(['startTime'])
updated_tiles = tiles.copy()
# Drop scenes that don't intersect with AOI at all
orig_len = updated_tiles.shape[0]
for index, row in tiles.iterrows():
intersection_area = aoi.intersection(row['geometry'])
overlap_area = shapefile_area(intersection_area)
aoi_area = shapefile_area(aoi)
percentage_coverage = (overlap_area/aoi_area)*100
if percentage_coverage == 0:
drop_ind = updated_tiles[updated_tiles['fileID'] == row['fileID']].index
updated_tiles = updated_tiles.drop(index=drop_ind)
updated_tiles = updated_tiles.reset_index(drop=True)
print("{}/{} scenes rejected for not intersecting with the AOI".format( \
orig_len-updated_tiles.shape[0], orig_len))
# group IFGs spatiotemporally
updated_tiles['ind_col'] = range(0, len(updated_tiles))
updated_tiles_dict, dropped_indices, gap_scenes_dict = continuous_time(updated_tiles, iter_id)
for i in dropped_indices:
drop_ind = updated_tiles.index[updated_tiles['ind_col'] == i]
updated_tiles.drop(drop_ind, inplace=True)
updated_tiles = updated_tiles.reset_index(drop=True)
# Kick out scenes that do not meet user-defined spatial threshold
aoi_area = shapefile_area(aoi)
orig_len = updated_tiles.shape[0]
track_rejected_inds = []
minimum_overlap_threshold = aoi_area - (250 * azimuth_mismatch)
print("")
print("AOI coverage: {}".format(aoi_area))
print("Allowable area of miscoverage: {}".format(250 * azimuth_mismatch))
print("minimum_overlap_threshold: {}".format(minimum_overlap_threshold))
print("")
if minimum_overlap_threshold < 0:
raise Exception('WARNING: user-defined mismatch of {}km\u00b2 too large relative to specified AOI'.format(azimuth_mismatch))
for i in enumerate(updated_tiles_dict):
intersection_area = aoi.intersection(i[1]['geometry'])
overlap_area = shapefile_area(intersection_area)
# Kick out scenes below specified overlap threshold
if minimum_overlap_threshold > overlap_area:
for iter_ind in enumerate(i[1]['ind_col']):
track_rejected_inds.append(iter_ind[1])
print("Rejected scene {} has only {}km\u00b2 overlap with AOI".format( \
i[1]['fileID'][iter_ind[0]], int(overlap_area)))
drop_ind = updated_tiles[updated_tiles['ind_col'] == iter_ind[1]].index
updated_tiles = updated_tiles.drop(index=drop_ind)
updated_tiles = updated_tiles.reset_index(drop=True)
print("{}/{} scenes rejected for not meeting defined spatial criteria".format( \
orig_len-updated_tiles.shape[0], orig_len))
# record rejected scenes separately
rejected_scenes_dict = [item for item in updated_tiles_dict \
if (any(x in track_rejected_inds for x in item['ind_col']))]
# pass scenes that are not tracked as rejected
updated_tiles_dict = [item for item in updated_tiles_dict \
if not (any(x in track_rejected_inds for x in item['ind_col']))]
return updated_tiles, pd.DataFrame(updated_tiles_dict), pd.DataFrame(gap_scenes_dict), pd.DataFrame(rejected_scenes_dict)
def pair_spatial_check(tiles,
aoi,
azimuth_mismatch=0.01,
iter_id='fileID'):
"""
Santity check function to confirm selected pairs meet user-defined spatial coverage threshold
"""
tiles['ind_col'] = range(0, len(tiles))
tiles = tiles.drop(columns=['reference', 'secondary'])
tiles_dict, dropped_pairs, gap_scenes_dict = continuous_time(tiles, iter_id='ind_col')
# Kick out scenes that do not meet user-defined spatial threshold
aoi_area = shapefile_area(aoi)
orig_len = tiles.shape[0]
track_rejected_inds = []
minimum_overlap_threshold = aoi_area - (250 * azimuth_mismatch)
if minimum_overlap_threshold < 0:
raise Exception('WARNING: user-defined mismatch of {}km\u00b2 too large relative to specified AOI'.format(azimuth_mismatch))
for i in enumerate(tiles_dict):
intersection_area = aoi.intersection(i[1]['geometry'])
overlap_area = shapefile_area(intersection_area)
# Kick out scenes below specified overlap threshold
if minimum_overlap_threshold > overlap_area:
for iter_ind in enumerate(i[1]['ind_col']):
track_rejected_inds.append(iter_ind[1])
print("Rejected pair {} has only {}km\u00b2 overlap with AOI {}ID {}Ind".format( \
i[1]['reference_date'][iter_ind[0]].replace('-', '') + '_' + \
i[1]['secondary_date'][iter_ind[0]].replace('-', ''), \
overlap_area, iter_ind[1], i[0]))
drop_ind = tiles[tiles['ind_col'] == iter_ind[1]].index
tiles = tiles.drop(index=drop_ind)
tiles = tiles.reset_index(drop=True)
print("{}/{} scenes rejected for not meeting defined spatial criteria".format( \
orig_len-tiles.shape[0], orig_len))
# record rejected scenes separately
rejected_scenes_dict = [item for item in tiles_dict \
if (any(x in track_rejected_inds for x in item['ind_col']))]
# pass scenes that are not tracked as rejected
tiles_dict = [item for item in tiles_dict \
if not (any(x in track_rejected_inds for x in item['ind_col']))]
return pd.DataFrame(tiles_dict), pd.DataFrame(gap_scenes_dict), pd.DataFrame(rejected_scenes_dict)
df_aoi = gpd.read_file(aoi_shapefile)
aoi = df_aoi.geometry.unary_union
aoi
###Output
_____no_output_____
###Markdown
Currently, there is a lot of data in each of the rows above. We really only need the AOI `geometry` and the `path_number`.
###Code
path_numbers = df_aoi.path_number.unique().tolist()
###Output
_____no_output_____
###Markdown
Generate a stack
Using all the tiles that are needed to cover the AOI, we make a geometric query based on the frame. We now include only the path we are interested in.
###Code
path_dict = {}
path_dict['pathNumber'] = str(path_numbers[0])
aoi_geometry = pd.DataFrame([path_dict])
aoi_geometry = gpd.GeoDataFrame(aoi_geometry, geometry=[shape(aoi)], crs=CRS.from_epsg(4326))
aoi_geometry['pathNumber'] = aoi_geometry['pathNumber'].astype(int)
df_stack = get_s1_stack_by_dataframe(aoi_geometry,
path_numbers=path_numbers)
f'We have {df_stack.shape[0]} frames in our stack'
fig, ax = plt.subplots()
df_stack.plot(ax=ax, alpha=.5, color='green', label='Frames interesecting tile')
df_aoi.exterior.plot(color='black', ax=ax, label='AOI')
plt.legend()
###Output
_____no_output_____
###Markdown
Note, we now see the frames cover the entire AOI as we expect. First, remove all scenes that do not produce spatiotemporally contiguous pairs or do not meet the specified intersection threshold.
###Code
df_stack, df_stack_dict, gap_scenes_dict, rejected_scenes_dict = minimum_overlap_query(df_stack, aoi, azimuth_mismatch=azimuth_mismatch)
f'We have {df_stack.shape[0]} frames in our stack'
###Output
_____no_output_____
###Markdown
Plot acquisitions that aren't continuous (i.e. have gaps)
###Code
if not gap_scenes_dict.empty:
gap_scenes_dict = gap_scenes_dict.sort_values(by=['start_date'])
for index, row in gap_scenes_dict.iterrows():
fig, ax = plt.subplots()
p = gpd.GeoSeries(row['geometry'])
p.exterior.plot(color='black', ax=ax, label=row['start_date_str'][0])
df_aoi.exterior.plot(color='red', ax=ax, label='AOI')
plt.legend()
plt.show
###Output
_____no_output_____
###Markdown
Plot all mosaicked acquisitions that were rejected for not meeting user-specified spatial constraints
###Code
if not rejected_scenes_dict.empty:
rejected_scenes_dict = rejected_scenes_dict.sort_values(by=['start_date'])
fig, ax = plt.subplots()
for index, row in rejected_scenes_dict.iterrows():
p = gpd.GeoSeries(row['geometry'])
p.exterior.plot(color='black', ax=ax)
df_aoi.exterior.plot(color='red', ax=ax, label='AOI')
plt.legend()
###Output
_____no_output_____
###Markdown
Plot each individual mosaicked acquisition that was rejected for not meeting user-specified spatial constraints
###Code
if not rejected_scenes_dict.empty:
for index, row in rejected_scenes_dict.iterrows():
fig, ax = plt.subplots()
p = gpd.GeoSeries(row['geometry'])
p.exterior.plot(color='black', ax=ax, label=row['start_date_str'][0])
df_aoi.exterior.plot(color='red', ax=ax, label='AOI')
plt.legend()
plt.show
###Output
_____no_output_____
###Markdown
Plot all mosaicked acquisitions that meet user-defined spatial coverage
###Code
fig, ax = plt.subplots()
for index, row in df_stack_dict.iterrows():
p = gpd.GeoSeries(row['geometry'])
p.exterior.plot(color='black', ax=ax)
df_aoi.exterior.plot(color='red', ax=ax, label='AOI')
plt.legend()
###Output
_____no_output_____
###Markdown
Next, we filter the stack by month to ensure we only have SLCs we need.
###Code
df_stack_month = df_stack[df_stack.start_date.dt.month.isin(MONTHS_OF_INTEREST)]
df_stack_month = df_stack_month[df_stack_month.start_date.dt.year.isin(YEARS_OF_INTEREST)]
###Output
_____no_output_____
###Markdown
We will create a list of ```min_reference_dates``` in descending order starting with the most recent date from the SLC stack ```df_stack_month``` as the start date.
###Code
min_reference_dates = sorted(df_stack_month['startTime'].to_list())
min_reference_dates = sorted(list(set([i.replace(hour=0, minute=0, second=0) for i in min_reference_dates])), reverse = True)
###Output
_____no_output_____
###Markdown
We can now enumerate the SLC pairs that will produce the interferograms (GUNWs) based on initially defined parameters that are exposed at the top-level of this jupyter notebook.
###Code
ifg_pairs = []
temporal_window_days = 365*3
# Avoid duplicate reference scenes (i.e. extra neighbors than intended)
track_ref_dates = []
for min_ref_date in tqdm(min_reference_dates):
temp = enumerate_ifgs_from_stack(df_stack_month,
aoi,
min_ref_date,
enumeration_type='tile', # options are 'tile' and 'path'. 'path' processes multiple references simultaneously
min_days_backward=min_days_backward,
num_neighbors_ref=1,
num_neighbors_sec=num_neighbors,
temporal_window_days=temporal_window_days,
min_tile_aoi_overlap_km2=.1,#Minimum reference tile overlap of AOI in km2
min_ref_tile_overlap_perc=.1,#Relative overlap of secondary frames over reference frame
minimum_ifg_area_km2=0.1,#The minimum overlap of reference and secondary in km2
minimum_path_intersection_km2=.1,#Overlap of common track union with respect to AOI in km2
)
if temp != []:
iter_key = temp[0]['reference']['start_date'].keys()[0]
iter_references_scenes = [temp[0]['reference']['start_date'][iter_key]]
if not any(x in iter_references_scenes for x in track_ref_dates):
track_ref_dates.extend(iter_references_scenes)
ifg_pairs += temp
###Output
_____no_output_____
###Markdown
OPTIONAL: densify the network with temporal sampling parameters
###Code
# Densify network with specified temporal sampling
# Avoid duplicate reference scenes (i.e. extra neighbors than intended)
if min_days_backward_timesubset != []:
for t_ind,t_interval in enumerate(min_days_backward_timesubset):
track_ref_dates = []
for min_ref_date in tqdm(min_reference_dates):
temp = enumerate_ifgs_from_stack(df_stack_month,
aoi,
min_ref_date,
enumeration_type='tile', # options are 'tile' and 'path'. 'path' processes multiple references simultaneously
min_days_backward=t_interval,
num_neighbors_ref=1,
num_neighbors_sec=num_neighbors_timesubset[t_ind],
temporal_window_days=temporal_window_days_timesubset,
min_tile_aoi_overlap_km2=.1,#Minimum reference tile overlap of AOI in km2
min_ref_tile_overlap_perc=.1,#Relative overlap of secondary frames over reference frame
minimum_ifg_area_km2=0.1,#The minimum overlap of reference and secondary in km2
minimum_path_intersection_km2=.1,#Overlap of common track union with respect to AOI in km2
)
if temp != []:
iter_key = temp[0]['reference']['start_date'].keys()[0]
iter_references_scenes = [temp[0]['reference']['start_date'][iter_key]]
if not any(x in iter_references_scenes for x in track_ref_dates):
track_ref_dates.extend(iter_references_scenes)
ifg_pairs += temp
f'The number of GUNWs (likely lots of duplicates) is {len(ifg_pairs)}'
###Output
_____no_output_____
###Markdown
Get Dataframe
###Code
df_pairs = distill_all_pairs(ifg_pairs)
f"# of GUNWs: ' {df_pairs.shape[0]}"
###Output
_____no_output_____
###Markdown
Deduplication Pt. 1
A `GUNW` is uniquely determined by the reference and secondary IDs. We concatenate these sorted lists and generate a lossy hash to deduplicate products we may have introduced from the enumeration above.
###Code
import hashlib
import json
def get_gunw_hash_id(reference_ids: list, secondary_ids: list) -> str:
all_ids = json.dumps([' '.join(sorted(reference_ids)),
' '.join(sorted(secondary_ids))
]).encode('utf8')
hash_id = hashlib.md5(all_ids).hexdigest()
return hash_id
def hasher(row):
return get_gunw_hash_id(row['reference'], row['secondary'])
df_pairs['hash_id'] = df_pairs.apply(hasher, axis=1)
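# Quick check with dummy ids: the hash should not depend on the ordering of the
# SLC id lists, because both lists are sorted before hashing.
assert get_gunw_hash_id(['slc_b', 'slc_a'], ['slc_c']) == get_gunw_hash_id(['slc_a', 'slc_b'], ['slc_c'])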
f"# of duplicated entries: {df_pairs.duplicated(subset=['hash_id']).sum()}"
df_pairs = df_pairs.drop_duplicates(subset=['hash_id']).reset_index(drop=True)
f"# of UNIQUE GUNWs: {df_pairs.shape[0]}"
###Output
_____no_output_____
###Markdown
Update types for Graphical Analysis
We want to do some basic visualization to check that we traverse time correctly. We do some simple, standard pandas manipulation.
###Code
df_pairs['reference_date'] = pd.to_datetime(df_pairs['reference_date'])
df_pairs['secondary_date'] = pd.to_datetime(df_pairs['secondary_date'])
df_pairs.head()
###Output
_____no_output_____
###Markdown
Visualize a Date Graph from Time Series
We can put this into a networkx directed graph (DiGraph) and use some simple network functions to check connectivity.
We are going to use just dates for nodes, though you could use `(ref_date, hash_id)` for nodes and then inspect connected components. That is for another notebook.
###Code
# Get unique dates
unique_dates = df_pairs.reference_date.tolist() + df_pairs.secondary_date.tolist()
unique_dates = sorted(list(set(unique_dates)))
# initiate and plot date notes
date2node = {date: k for (k, date) in enumerate(unique_dates)}
node2date = {k: date for (date, k) in date2node.items()}
%matplotlib widget
G = nx.DiGraph()
edges = [(date2node[ref_date], date2node[sec_date])
for (ref_date, sec_date) in zip(df_pairs.reference_date, df_pairs.secondary_date)]
G.add_edges_from(edges)
nx.draw(G)
###Output
_____no_output_____
###Markdown
This function checks that there is a path from the first date to the last one. The y-axis is created purely for display, so it doesn't really indicate anything but the flow by month.
###Code
nx.has_path(G,
target=date2node[unique_dates[0]],
source=date2node[unique_dates[-1]])
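# If this returns False, inspecting the weakly connected components can help
# locate where the date graph breaks, e.g.:
# [sorted(node2date[n] for n in comp) for comp in nx.weakly_connected_components(G)]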
###Output
_____no_output_____
###Markdown
Ensure that the result above returns a ```True``` value to be able to produce a time-series.
###Code
fig, ax = plt.subplots(figsize=(15, 5))
increment = [date.month + date.day for date in unique_dates]
# source: https://stackoverflow.com/a/27852570
scat = ax.scatter(unique_dates, increment)
position = scat.get_offsets().data
pos = {date2node[date]: position[k] for (k, date) in enumerate(unique_dates)}
nx.draw_networkx_edges(G, pos=pos, ax=ax)
ax.grid('on')
ax.tick_params(axis='x',
which='major',
labelbottom=True,
labelleft=True)
ymin, ymax = ax.get_ylim()
###Output
_____no_output_____
###Markdown
Deduplication Pt. 2
This is to ensure that previous processing hasn't generated any of the products we have just enumerated.
Check CMR
This function checks the ASF DAAC for GUNWs with the same spatial extent and same date pairs as the ones created. At some point, we will be able to check the input SLC ids from CMR, but currently that is not possible. If you are processing a new AOI whose products have not been delivered, you can ignore this step. It is a bit time consuming as the queries are done product by product.
###Code
import json
import xml.etree.ElementTree as ET
import requests
COLLECTION_CONCEPT_ID = 'C1595422627-ASF'
CMR_URL = 'https://cmr.earthdata.nasa.gov/search/granules.echo10'
def parse_echo10(echo10_xml: str):
granules = []
root = ET.fromstring(echo10_xml)
for granule in root.findall('result/Granule'):
g = {
'product_id': granule.find('GranuleUR').text,
'product_version': granule.find('GranuleUR').text.split('-')[-1],
'reference_scenes': [],
'secondary_scenes': []
}
for input_granule in granule.findall('InputGranules/InputGranule'):
input_granule_type, input_granule_name = input_granule.text.split(' ')
if input_granule_type == '[Reference]':
g['reference_scenes'].append(input_granule_name)
else:
g['secondary_scenes'].append(input_granule_name)
granules.append(g)
return granules
def get_cmr_products(path: int = None):
session = requests.Session()
search_params = {
'provider': 'ASF',
'collection_concept_id': COLLECTION_CONCEPT_ID,
'page_size': 2000,
}
if path is not None:
search_params['attribute[]'] = f'int,PATH_NUMBER,{path}'
headers = {}
products = []
while True:
response = session.get(CMR_URL, params=search_params, headers=headers)
response.raise_for_status()
parsed_results = parse_echo10(response.text)
products.extend(parsed_results)
if 'CMR-Search-After' not in response.headers:
break
headers = {'CMR-Search-After': response.headers['CMR-Search-After']}
return products
# query CMR for all existing products in path
results = []
results = get_cmr_products(path_numbers[0])
# sort with descending product version numbers
results = sorted(results, key=lambda d: d['product_version'], reverse = True)
# convert CMR results to dataframe with the latest product version
new_results = []
track_scenes = []
for i in results:
ifg_append = i['reference_scenes'] + i['secondary_scenes']
# pass only first instance of scene combo
if ifg_append not in track_scenes:
track_scenes.append(ifg_append)
new_results.append(i)
results = pd.DataFrame(new_results)
# update column names for merging
results.rename(columns={"reference_scenes": "reference", "secondary_scenes": "secondary"}, inplace = True)
###Output
_____no_output_____
###Markdown
Capture products in CMR
###Code
def capture_cmr_products(row, cmr_products):
'''Capture products that exist in CMR based on reference and secondary scenes'''
# concatenate reference and secondary scene in each dataframe row
row_scenes = row['reference'] + row['secondary']
# flag True if scenes from enumerator results are within CMR results
if any(set(row_scenes).issubset(set(x)) for x in cmr_products):
product_on_cmr = True
else:
product_on_cmr = np.nan
return product_on_cmr
# Finally filter out pairs in CMR
try:
# parse all reference and secondary products for each corresponding product in CMR
cmr_products = results['reference'] + results['secondary']
cmr_products = cmr_products.to_list()
# determine which products in the enumerator already exist in CMR
df_pairs['product_id'] = df_pairs.apply(lambda r: capture_cmr_products(r, cmr_products), axis=1)
# filter out pairs in CMR
total_existing_gunws = len(df_pairs[df_pairs['product_id'].notna()])
print('existing_gunws: ', total_existing_gunws)
print('Total pairs', df_pairs.shape[0])
df_pairs_filtered = df_pairs[~df_pairs['product_id'].notna()].reset_index(drop=True)
df_pairs_filtered.drop_duplicates(subset=['hash_id'], inplace=True)
print('after filtering, total pairs: ', df_pairs_filtered.shape[0])
except KeyError:
df_pairs_filtered = copy.deepcopy(df_pairs)
df_pairs_filtered.drop_duplicates(subset=['hash_id'], inplace=True)
print('after filtering, total pairs: ', df_pairs_filtered.shape[0])
if len(df_pairs_filtered) == 0:
raise Exception('All queried pairs are in CMR, there is nothing to process with specified parameters.')
###Output
_____no_output_____
###Markdown
Check Hyp3 Account
We are now going to:
1. check products in the open s3 bucket
2. check running/pending jobs
Notes:
1. To accomplish step 1 there is some verbose code below. Once we automate delivery, this step will be obsolete. However, until we have delivery, we have to make sure that there are no existing products. Additionally, if we are using a separate (non-operational) account, it is good to use this.
2. If we are debugging products and some of our previously generated products were made incorrectly, we will want to ignore this step.
###Code
# uses .netrc; add `prompt=True` to prompt for credentials;
hyp3_isce = hyp3_sdk.HyP3(deploy_url)
pending_jobs = hyp3_isce.find_jobs(status_code='PENDING') + hyp3_isce.find_jobs(status_code='RUNNING')
all_jobs = hyp3_isce.find_jobs()
print(all_jobs)
###Output
_____no_output_____
###Markdown
1. Get existing products in s3 bucket
Get the bucket (there is only one).
###Code
job_data = [j.to_dict() for j in all_jobs]
job_data_s3 = list(filter(lambda job: 'files' in job.keys(), job_data))
bucket = job_data_s3[0]['files'][0]['s3']['bucket']
###Output
_____no_output_____
###Markdown
Get all keys
###Code
job_keys = [job['files'][0]['s3']['key'] for job in job_data_s3]
from botocore import UNSIGNED
from botocore.config import Config
s3 = boto3.resource('s3',config=Config(signature_version=UNSIGNED))
prod_bucket = s3.Bucket(bucket)
objects = list(prod_bucket.objects.all())
ncs = list(filter(lambda x: x.key.endswith('.nc'), objects))
###Output
_____no_output_____
###Markdown
Need to physically check if the products are not there (could have been deleted!)
###Code
nc_keys = [nc_ob.key for nc_ob in ncs]
jobs_with_prods_in_s3 = [job for (k, job) in enumerate(job_data_s3) if job_keys[k] in nc_keys]
slcs = [(job['job_parameters']['granules'],
job['job_parameters']['secondary_granules'])
for job in jobs_with_prods_in_s3]
hash_ids_of_prods_in_s3 = [get_gunw_hash_id(*slc) for slc in slcs]
f"We are removing {df_pairs_filtered['hash_id'].isin(hash_ids_of_prods_in_s3).sum()} GUNWs for submission"
items = hash_ids_of_prods_in_s3
df_pairs_filtered = df_pairs_filtered[~df_pairs_filtered['hash_id'].isin(items)].reset_index(drop=True)
f"Current # of GUNWs: {df_pairs_filtered.shape[0]}"
###Output
_____no_output_____
###Markdown
2. Running or Pending Jobs
###Code
pending_job_data = [j.to_dict() for j in pending_jobs]
pending_slcs = [(job['job_parameters']['granules'],
job['job_parameters']['secondary_granules'])
for job in pending_job_data]
hash_ids_of_pending_jobs = [get_gunw_hash_id(*slc) for slc in pending_slcs]
items = hash_ids_of_pending_jobs
f"We are removing {df_pairs_filtered['hash_id'].isin(items).sum()} GUNWs for submission"
items = hash_ids_of_pending_jobs
df_pairs_filtered = df_pairs_filtered[~df_pairs_filtered['hash_id'].isin(items)].reset_index(drop=True)
f"Current # of GUNWs: {df_pairs_filtered.shape[0]}"
###Output
_____no_output_____
###Markdown
Visualize a Date Graph from the Final Filtered Time Series
We can put this into a networkx directed graph (DiGraph) and use some simple network functions to check connectivity (which may not be applicable here).
We are going to use just dates for nodes, though you could use `(ref_date, hash_id)` for nodes and then inspect connected components. That is for another notebook.
###Code
# Get unique dates
unique_dates = df_pairs_filtered.reference_date.tolist() + df_pairs_filtered.secondary_date.tolist()
unique_dates = sorted(list(set(unique_dates)))
# initiate and plot date notes
date2node = {date: k for (k, date) in enumerate(unique_dates)}
node2date = {k: date for (date, k) in date2node.items()}
%matplotlib widget
G = nx.DiGraph()
edges = [(date2node[ref_date], date2node[sec_date])
for (ref_date, sec_date) in zip(df_pairs_filtered.reference_date, df_pairs_filtered.secondary_date)]
G.add_edges_from(edges)
nx.draw(G)
###Output
_____no_output_____
###Markdown
This function checks that there is a path from the first date to the last one. The y-axis is created purely for display, so it doesn't really indicate anything but the flow by month. Again, this may not be applicable in cases where parts of the network had already been deployed before and/or you are densifying by specifying temporal sampling. In such cases, these plots serve merely as a sanity check.
###Code
nx.has_path(G,
target=date2node[unique_dates[0]],
source=date2node[unique_dates[-1]])
###Output
_____no_output_____
###Markdown
Ensure that the result above returns a ```True``` value to be able to produce a time-series.
###Code
fig, ax = plt.subplots(figsize=(15, 5))
increment = [date.month + date.day for date in unique_dates]
# source: https://stackoverflow.com/a/27852570
scat = ax.scatter(unique_dates, increment)
position = scat.get_offsets().data
pos = {date2node[date]: position[k] for (k, date) in enumerate(unique_dates)}
nx.draw_networkx_edges(G, pos=pos, ax=ax)
ax.grid('on')
ax.tick_params(axis='x',
which='major',
labelbottom=True,
labelleft=True)
ymin, ymax = ax.get_ylim()
###Output
_____no_output_____
###Markdown
Submit jobs to Hyp3
###Code
records_to_submit = df_pairs_filtered.to_dict('records')
records_to_submit[0]
###Output
_____no_output_____
###Markdown
The code below puts the records into a format that we can submit to the Hyp3 API.
**Note 1**: an index/slice can be applied to the records to submit (see the comment in the cell below) to ensure we don't over-submit jobs for generating GUNWs.
**Note 2**: uncomment the code to *actually* submit the jobs.
###Code
# uses .netrc; add `prompt=True` to prompt for credentials;
hyp3_isce = hyp3_sdk.HyP3(deploy_url)
# NOTE: we are using "INSAR_ISCE" for the `main` branch.
# Change this to "INSAR_ISCE_TEST" to use the `dev` branch, but ONLY if you know what you're doing
# changing to dev will overwrite the product version number and make it difficult to dedup
job_type = 'INSAR_ISCE'
job_dicts = [{'name': job_name,
'job_type': job_type,
'job_parameters': {'granules': r['reference'],
'secondary_granules': r['secondary']}}
# NOTE THERE IS AN INDEX - this is to submit only a subset of Jobs
for r in records_to_submit]
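# To submit only a subset of jobs, slice the records in the comprehension above,
# e.g. `for r in records_to_submit[:50]` (50 is just an example cap).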
# Report summary of all job parameters
print("Start date is '{}'".format(unique_dates[0]))
print("End date is '{}'".format(unique_dates[-1]))
print("GUNWs expected '{}'".format(len(job_dicts)))
print("Job Name is '{}'".format(job_name))
print("Shapefile Name is '{}'".format(aoi_shapefile.split('/')[-1]))
#UNCOMMENT TO SUBMIT
#prepared_jobs = job_dicts
#submitted_jobs = hyp3_sdk.Batch()
#for batch in hyp3_sdk.util.chunk(prepared_jobs):
# submitted_jobs += hyp3_isce.submit_prepared_jobs(batch)
###Output
_____no_output_____
###Markdown
Query all jobs on the server
###Code
jobs = hyp3_isce.find_jobs()
print(jobs)
###Output
_____no_output_____
###Markdown
Query your particular job
###Code
jobs = hyp3_isce.find_jobs(name=job_name)
print(jobs)
# create a clean directory to deposit products in
# (os.remove() cannot delete a directory, so shutil.rmtree is used instead)
import os
import shutil
if os.path.exists(prod_dir):
    shutil.rmtree(prod_dir)
os.mkdir(prod_dir)
###Output
_____no_output_____
###Markdown
Below, we show how to download files. The multi-threading example will download products in parallel much faster than `jobs.download_files()`.
###Code
jobs = hyp3_isce.find_jobs(name=job_name)
print(jobs)
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
results = list(tqdm(executor.map(lambda job: job.download_files(), jobs), total=len(jobs)))
###Output
_____no_output_____ |
source-code/keras/imdb_rnn_no_gpu.ipynb | ###Markdown
IMDB: recurrent neural networks
Data preprocessing
Required imports
###Code
from keras.datasets import imdb
from keras.preprocessing import sequence
import numpy as np
from sklearn.model_selection import train_test_split
###Output
Using TensorFlow backend.
###Markdown
Processing
Load the training and test data. To limit computation time, we restrict the number of words to 5,000.
###Code
num_words = 5_000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_words)
###Output
_____no_output_____
###Markdown
Since the reviews vary in length and we prefer to limit the computation time, we will base the classification on the first 100 features of each input sequence.
###Code
feature_length = 100
x_train = sequence.pad_sequences(x_train, maxlen=feature_length)
x_test = sequence.pad_sequences(x_test, maxlen=feature_length)
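# Quick sanity check: both arrays should now have shape (25000, feature_length).
print(x_train.shape, x_test.shape)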
###Output
_____no_output_____
###Markdown
Now the training and test inputs are 2D arrays. We split the training set into a subset for actual training, and one for validation. First we seed the random number generator to ensure reproducibility. In this case, we will use part of the 25,000 training examples as validation data.
###Code
np.random.seed(1234)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train)
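# With the default test_size=0.25 this yields roughly 18750 examples for
# training and 6250 for validation (matching the epoch logs below).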
###Output
_____no_output_____
###Markdown
GRU
Required imports & model definition
###Code
from keras.layers import Activation, Dense, Dropout
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import GRU
from keras.models import Sequential
from keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
Again, to limit training times, we restrict ourselves to a limited number of features.
###Code
vector_length = 64
num_units = 64
model = Sequential()
model.add(Embedding(num_words, vector_length, mask_zero=True,
input_length=feature_length))
model.add(GRU(num_units))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer=Adam(),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training
###Code
history = model.fit(x_train, y_train, batch_size=64, epochs=10,
validation_data=(x_val, y_val))
###Output
Train on 18750 samples, validate on 6250 samples
Epoch 1/10
18750/18750 [==============================] - 55s 3ms/step - loss: 0.4886 - acc: 0.7483 - val_loss: 0.3761 - val_acc: 0.8370
Epoch 2/10
18750/18750 [==============================] - 54s 3ms/step - loss: 0.3100 - acc: 0.8725 - val_loss: 0.3487 - val_acc: 0.8461
Epoch 3/10
18750/18750 [==============================] - 54s 3ms/step - loss: 0.2667 - acc: 0.8947 - val_loss: 0.3573 - val_acc: 0.8414
Epoch 4/10
18750/18750 [==============================] - 53s 3ms/step - loss: 0.2284 - acc: 0.9124 - val_loss: 0.3970 - val_acc: 0.8411
Epoch 5/10
18750/18750 [==============================] - 54s 3ms/step - loss: 0.1901 - acc: 0.9294 - val_loss: 0.4075 - val_acc: 0.8430
Epoch 6/10
18750/18750 [==============================] - 55s 3ms/step - loss: 0.1564 - acc: 0.9417 - val_loss: 0.4367 - val_acc: 0.8413
Epoch 7/10
18750/18750 [==============================] - 53s 3ms/step - loss: 0.1195 - acc: 0.9580 - val_loss: 0.4918 - val_acc: 0.8307
Epoch 8/10
18750/18750 [==============================] - 54s 3ms/step - loss: 0.0988 - acc: 0.9659 - val_loss: 0.5774 - val_acc: 0.8342
Epoch 9/10
18750/18750 [==============================] - 54s 3ms/step - loss: 0.0837 - acc: 0.9715 - val_loss: 0.6328 - val_acc: 0.8304
Epoch 10/10
18750/18750 [==============================] - 53s 3ms/step - loss: 0.0673 - acc: 0.9786 - val_loss: 0.6501 - val_acc: 0.8144
###Markdown
The training accuracy is much better than the validation accuracy, so the model is likely heavily overtrained.
Testing
###Code
model.evaluate(x_test, y_test)
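# evaluate() returns [loss, accuracy], in the order given by model.metrics_names.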
###Output
25000/25000 [==============================] - 14s 578us/step
###Markdown
LSTM
Required imports & model definition
###Code
from keras.layers.recurrent import LSTM
###Output
_____no_output_____
###Markdown
Again, to limit training times, we restrict ourselves to a limited number of features.
###Code
vector_length = 64
num_units = 64
model = Sequential()
model.add(Embedding(num_words, vector_length, mask_zero=True,
input_length=feature_length))
model.add(LSTM(num_units))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer=Adam(),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training
###Code
history = model.fit(x_train, y_train, batch_size=64, epochs=10,
validation_data=(x_val, y_val))
###Output
Train on 18750 samples, validate on 6250 samples
Epoch 1/10
18750/18750 [==============================] - 77s 4ms/step - loss: 0.4706 - acc: 0.7644 - val_loss: 0.3472 - val_acc: 0.8518
Epoch 2/10
18750/18750 [==============================] - 75s 4ms/step - loss: 0.3061 - acc: 0.8759 - val_loss: 0.3585 - val_acc: 0.8365
Epoch 3/10
18750/18750 [==============================] - 74s 4ms/step - loss: 0.2570 - acc: 0.8951 - val_loss: 0.3720 - val_acc: 0.8427
Epoch 4/10
18750/18750 [==============================] - 74s 4ms/step - loss: 0.2221 - acc: 0.9147 - val_loss: 0.4102 - val_acc: 0.8389
Epoch 5/10
18750/18750 [==============================] - 75s 4ms/step - loss: 0.1798 - acc: 0.9315 - val_loss: 0.4257 - val_acc: 0.8352
Epoch 6/10
18750/18750 [==============================] - 74s 4ms/step - loss: 0.1550 - acc: 0.9430 - val_loss: 0.4374 - val_acc: 0.8299
Epoch 7/10
18750/18750 [==============================] - 74s 4ms/step - loss: 0.1348 - acc: 0.9506 - val_loss: 0.5027 - val_acc: 0.8222
Epoch 8/10
18750/18750 [==============================] - 73s 4ms/step - loss: 0.1286 - acc: 0.9540 - val_loss: 0.4751 - val_acc: 0.8222
Epoch 9/10
18750/18750 [==============================] - 75s 4ms/step - loss: 0.0994 - acc: 0.9643 - val_loss: 0.5787 - val_acc: 0.8176
Epoch 10/10
18750/18750 [==============================] - 73s 4ms/step - loss: 0.0884 - acc: 0.9695 - val_loss: 0.6713 - val_acc: 0.8282
###Markdown
The training accuracy is much better than the validation accuracy, so the model is likely heavily overtrained.
Testing
###Code
model.evaluate(x_test, y_test)
###Output
25000/25000 [==============================] - 16s 639us/step
|
week0_04_svm_and_pca/week0_04_pictures_svd__completed.ipynb | ###Markdown
Picture compression using SVD
In this exercise you are supposed to study how SVD can be used in image compression.
_Based on the open course in [Numerical Linear Algebra](https://github.com/oseledets/nla2018) by Ivan Oseledets_
###Code
# If you are using colab, uncomment this cell
# ! wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/a5bf18c/datasets/waiting.jpeg
# ! wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/a5bf18c/datasets/mipt.jpg
# ! wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/a5bf18c/datasets/simpsons.jpg
# ! mkdir ../dataset
# ! mv -t ../dataset waiting.jpeg mipt.jpg simpsons.jpg
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
###Output
_____no_output_____
###Markdown
1. Singular values
Compute the singular values of the predownloaded image (via the code provided below) and plot them. Do not forget to use a logarithmic scale.
###Code
face_raw = Image.open("../dataset/waiting.jpeg")
face = np.array(face_raw).astype(np.uint8)
plt.imshow(face_raw)
plt.xticks(())
plt.yticks(())
plt.title("Original Picture")
plt.show()
# optional: zero mean the image
face = face - np.mean(face, axis=1, keepdims=True)
# Image is saved as a 3-dimensional array with shape H x W x C (height x width x channels)
Rf = face[:,:,0]
Gf = face[:,:,1]
Bf = face[:,:,2]
# Compute SVD and plot the singular values for different image channels
u, Rs, vh = np.linalg.svd(Rf, full_matrices=False)
u, Gs, vh = np.linalg.svd(Gf, full_matrices=False)
u, Bs, vh = np.linalg.svd(Bf, full_matrices=False)
plt.figure(figsize=(12,7))
plt.plot(Rs,'ro')
plt.plot(Gs,'g.')
plt.plot(Bs,'b:')
plt.yscale('log')
plt.ylabel("Singular values")
plt.xlabel("Singular value order")
plt.show()
###Output
_____no_output_____
###Markdown
2. Compress
Complete the function ```compress``` that performs SVD and truncates it (using $k$ singular values/vectors). See the prototype below. Note that if your images are not grayscale, you have to split the image into channels and work with the matrices corresponding to the different channels separately.
Plot the approximate reconstructed image $M_\varepsilon$ of your favorite image such that $rank(M_\varepsilon) = 5, 20, 50$, using ```plt.subplots```.
###Code
def compress(image, k):
"""
Perform svd decomposition and truncate it (using k singular values/vectors)
Parameters:
image (np.array): input image (probably, colourful)
k (int): approximation rank
--------
Returns:
reconst_matrix (np.array): reconstructed matrix (tensor in colourful case)
s (np.array): array of singular values
"""
image2 = image.copy()
Rf = image2[:,:,0]# - image2[:,:,0].mean(axis=1, keepdims=True)
Gf = image2[:,:,1]# - image2[:,:,1].mean(axis=1, keepdims=True)
Bf = image2[:,:,2]# - image2[:,:,2].mean(axis=1, keepdims=True)
# compute per-channel SVD for input image
# <your code here>
u_r, Rs, vh_r = np.linalg.svd(Rf, full_matrices=False)
u_g, Gs, vh_g = np.linalg.svd(Gf, full_matrices=False)
u_b, Bs, vh_b = np.linalg.svd(Bf, full_matrices=False)
Rs = Rs[:k]
Gs = Gs[:k]
Bs = Bs[:k]
# reconstruct the input image with the given approximation rank
reduced_im = np.zeros((image.shape),np.uint8)
# <your code here>
red_channel = u_r[:, :k] @ np.diag(Rs) @ vh_r[:k, :]
green_channel = u_g[:, :k] @ np.diag(Gs) @ vh_g[:k, :]
blue_channel = u_b[:, :k] @ np.diag(Bs) @ vh_b[:k, :]
reduced_im[..., 0] = red_channel
reduced_im[..., 1] = green_channel
reduced_im[..., 2] = blue_channel
# save the array of top-k singular values
s = np.zeros((len(Gs), 3))
# <your code here>
s[:, 0] = Rs
s[:, 1] = Gs
s[:, 2] = Bs
return reduced_im.copy(), s
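# Note (possible refinement): the reconstructed float channels can fall slightly
# outside [0, 255]; clipping before the uint8 cast, e.g.
# np.clip(red_channel, 0, 255), would avoid wrap-around artifacts in the plots.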
plt.figure(figsize=(18,12))
for i,k in enumerate([350,300,250,200,150,100,50,20,1]):
plt.subplot(3,3,i+1)
im,s = compress(face,k)
plt.imshow(Image.fromarray(im,"RGB"))
plt.xticks(())
plt.yticks(())
plt.title("{} greatest SV".format(k))
###Output
_____no_output_____
###Markdown
3. Discover
Plot the following two figures for your favorite picture:
* How does the relative error of the approximation depend on the rank of the approximation?
* How does the compression rate in terms of stored information ((singular vectors + singular values) / total size of the image) depend on the rank of the approximation?
###Code
img, s = compress(face, k)
# fancy progress bar
from tqdm.auto import tqdm
k_list = range(5, face.shape[1], 1)
rel_err = []
info = []
for k in tqdm(k_list, leave=False):
img, s = compress(face, k)
current_relative_error = np.linalg.norm(img.astype(np.float64) - face.astype(np.float64))# MSE(img, face) / l2_norm(face)
current_relative_error /= np.linalg.norm(face.astype(np.float64))
    current_information = k * (face.shape[0] + face.shape[1] + 1) # U(image_height x K) @ S(diag KxK) @ V^T(K x image_width), per channel
rel_err.append(current_relative_error)
info.append(current_information)
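# Optional: compression rate per channel, assuming the intended metric is
# (stored values) / (total pixels) = k * (H + W + 1) / (H * W)
compression_rate = [k * (face.shape[0] + face.shape[1] + 1) /
                    (face.shape[0] * face.shape[1]) for k in k_list]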
plt.figure(figsize=(12,7))
plt.subplot(2,1,1)
plt.title("Memory volume plot")
plt.xlabel("Rank")
plt.ylabel("Bytes")
plt.plot(k_list, info)
plt.subplot(2,1,2)
plt.title("Relative error plot")
plt.xlabel("Rank")
plt.ylabel("Rel err value")
plt.plot(k_list, rel_err)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
4. Compare
Consider the following two pictures. Compute their approximations (with the same rank, or the same relative error). What do you see? Explain the results.
###Code
image_raw1 = Image.open("../dataset/mipt.jpg")
image_raw2 = Image.open("../dataset/simpsons.jpg")
image1 = np.array(image_raw1).astype(np.uint8)
image2 = np.array(image_raw2).astype(np.uint8)
plt.figure(figsize=(18, 6))
plt.subplot(1, 2, 1)
plt.imshow(image_raw1)
plt.title("One Picture")
plt.xticks(())
plt.yticks(())
plt.subplot(1, 2, 2)
plt.imshow(image_raw2)
plt.title("Another Picture")
plt.xticks(())
plt.yticks(())
plt.show()
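# Added sketch (not part of the original assignment): compare how quickly the singular
# values of the two pictures decay -- the image whose spectrum drops faster is well
# approximated by a lower rank, which helps explain the visual difference below.
s1 = np.linalg.svd(image1.mean(axis=2), compute_uv=False)
s2 = np.linalg.svd(image2.mean(axis=2), compute_uv=False)
plt.figure(figsize=(8, 5))
plt.semilogy(s1 / s1[0], label="Image 1")
plt.semilogy(s2 / s2[0], label="Image 2")
plt.xlabel("Singular value order")
plt.ylabel("Normalized singular value")
plt.legend()
plt.show()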
###Output
_____no_output_____
###Markdown
Same rank
###Code
# Your code is here
im1, s = compress(image1, 100)
im2, s = compress(image2, 100)
plt.figure(figsize=(18,6))
plt.subplot(1,2,1)
plt.imshow(Image.fromarray(im1, "RGB"))
plt.xticks(())
plt.yticks(())
plt.subplot(1,2,2)
plt.imshow(Image.fromarray(im2, "RGB"))
plt.xticks(())
plt.yticks(())
plt.show()
###Output
_____no_output_____
###Markdown
Same relative error
###Code
k_list = range(5,500,10)
rel_err1 = []
rel_err2 = []
relative_error_threshold = 0.15
for k in tqdm(k_list):
image1_compressed, s = compress(image1, k)
image2_compressed, s = compress(image2, k)
relative_error_1 = np.linalg.norm(image1_compressed.astype(np.float64) - image1.astype(np.float64))
relative_error_1 /= np.linalg.norm(image1.astype(np.float64))
relative_error_2 = np.linalg.norm(image2_compressed.astype(np.float64) - image2.astype(np.float64))
relative_error_2 /= np.linalg.norm(image2.astype(np.float64))
rel_err1.append(relative_error_1)
rel_err2.append(relative_error_2)
# find the indices
idx1 = int(np.argwhere(np.diff(np.sign(np.array(rel_err1) - relative_error_threshold))).flatten())
idx2 = int(np.argwhere(np.diff(np.sign(np.array(rel_err2) - relative_error_threshold))).flatten())
print("K1 = {}; K2 = {}".format(k_list[idx1], k_list[idx2]))
plt.figure(figsize=(12,7))
plt.plot(k_list[idx1], rel_err1[idx1], 'ro')
plt.plot(k_list[idx2], rel_err2[idx2], 'ro')
plt.title("Rel err for 2 pics")
plt.xlabel("Rank")
plt.ylabel("Rel error val")
plt.plot(k_list, rel_err1, label="Image 1")
plt.plot(k_list, rel_err2, label="Image 2")
plt.plot(k_list, [relative_error_threshold]*len(k_list),":",)
plt.legend()
plt.show()
relative_error_threshold = 0.15
idx1 = int(np.argwhere(np.diff(np.sign(np.array(rel_err1) - relative_error_threshold))).flatten())
idx2 = int(np.argwhere(np.diff(np.sign(np.array(rel_err2) - relative_error_threshold))).flatten())
image1_compressed, s = compress(image1, k_list[idx1])
image2_compressed, s = compress(image2, k_list[idx2])
plt.figure(figsize=(18,6))
plt.subplot(1,2,1)
plt.imshow(Image.fromarray(image1_compressed, "RGB"))
plt.xticks(())
plt.yticks(())
plt.subplot(1,2,2)
plt.imshow(Image.fromarray(image2_compressed, "RGB"))
plt.xticks(())
plt.yticks(())
plt.show()
###Output
_____no_output_____ |
calibration/notebooks/Calibration_SCE.ipynb | ###Markdown
Single catchment calibration

This notebook demonstrates how you might calibrate the model to a single gauging station.
###Code
import awrams.calibration.calibrate as cal
from awrams.calibration.sce import SCEOptimizer,ProxyOptimizer
from awrams.models import awral
from awrams.utils import datetools as dt
import pandas as pd
import os
# %matplotlib inline
from matplotlib import pyplot
data_path = '../../test_data/calibration/'
cal_catchment= '421103' # '204007' # '421103'
time_period = dt.dates('1990 - 1995')
time_period
from awrams.utils import catchments
# Get the catchment as a spatial extent we can use as the bounds of the simulation
try:
db = catchments.CatchmentDB()
spatial = db.get_by_id(cal_catchment)
except ImportError as e:
print(e)
# read catchment extent from a pickle
import pickle
pkl = os.path.join(data_path,'extent_421103.pkl')
spatial = pickle.load(open(pkl,'rb'))
spatial.cell_count
# cal.set_model(model=awral)
def change_path_to_forcing(imap):
from awrams.utils.nodegraph import nodes
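# FORCING maps each model input name to a (file glob pattern, netCDF variable name) pair;
# presumably forcing_from_ncfiles uses these to locate and read the forcing files under data_path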
FORCING = {
'tmin' : ('temp_min*','temp_min_day'),
'tmax' : ('temp_max*','temp_max_day'),
'precip': ('rain_day*','rain_day'),
'solar' : ('solar*' ,'solar_exposure_day')
}
for k,v in FORCING.items():
imap.mapping[k+'_f'] = nodes.forcing_from_ncfiles(data_path,v[0],v[1],cache=True)
change_path_to_forcing(cal.input_map)
# Load the observed streamflow data
csv = os.path.join(data_path,'q_obs.csv')
qobs = pd.read_csv(csv,parse_dates=[0])
qobs = qobs.set_index(qobs.columns[0])
obs = qobs[cal_catchment]
# Find all the calibratable parameters in the mapping...
# We'll calibrate all of them, but you could equally create a subset...
parameters = cal.get_parameter_df(cal.input_map.mapping)
parameters
# create the model evaluator...
evaluator = cal.RunoffEvaluator(time_period,spatial,obs)
evaluator.plot(evaluator.initial_results,period=dt.dates('1990'))
# Create the SCE instance...
sce = ProxyOptimizer(13,5,4,3,3,parameters,evaluator)
# and run it...
sce.run_optimizer()
sce.population.iloc[0]
evaluator.plot(evaluator.run_sim(sce.population.iloc[0]),period=dt.dates('1990'))
# run with seed population...
sce.run_optimizer(seed=sce.population.iloc[0])
evaluator.plot(evaluator.run_sim(sce.population.iloc[0]),period=dt.dates('1990'))
# kill the sce workers...cleanup child processes
sce.terminate_children()
###Output
_____no_output_____ |
analysis/sanidad/scrap_pdf_sanidad.ipynb | ###Markdown
Downloading Covid-19 death data from the Ministry of Health (Sanidad)

Objective

We read the data published in the daily Covid-19 reports from the Ministry of Health (Sanidad). This is an [example](https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_265_COVID-19.pdf) published on 4 December 2020. Manuel H. Arias [@walyt](https://twitter.com/walyt) [escovid19data](https://github.com/montera34/escovid19data). Final documented version, to be published in the repo [@walyt](https://twitter.com/walyt).

Code

We work with quite a few libraries; I had no problem installing those not available in my Anaconda environment by running `pip install <library>` from a terminal opened inside the `env` environment.
###Code
import os.path as pth
import datetime as dt
import time
from glob import glob
import re
import pandas as pd
import numpy as np
import requests
from shutil import copyfile
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
from matplotlib import cm
import matplotlib.dates as mdates
import matplotlib.ticker as ticker
from matplotlib.dates import (YEARLY, MONTHLY, DateFormatter, WeekdayLocator, MonthLocator,DayLocator,
rrulewrapper, RRuleLocator, drange)
import seaborn as sns
import matplotlib.colors as colors
import numpy as np
from datetime import datetime
import seaborn as sns
%matplotlib inline
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.pdfpage import PDFPage
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from io import StringIO
###Output
_____no_output_____
###Markdown
We prepare the regular expressions that will help us parse the information extracted from the PDFs, and define variables that help us manage the file names.
###Code
datadir='datos_sanidad/'
URL_reg='https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_{:02d}_COVID-19.pdf'
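# For reference (added comment), URL_reg.format(77) expands to
# https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_77_COVID-19.pdf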
###Output
_____no_output_____
###Markdown
Day-zero update with the historical series

Function to download a PDF file, copied from the script by [@alfonsotwr](https://github.com/alfonsotwr/snippets/tree/master/covidia-cam)
###Code
def descarga(url,num):
print('Descargando:', url)
fn=datadir+str(num)+'.pdf'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
with requests.Session() as s:
r = s.get(url, headers=headers)
if r.status_code == requests.codes.ok:
with open(fn, 'wb') as fp:
fp.write(r.content)
else:
print ('Error con el ',num)
return True
###Output
_____no_output_____
###Markdown
Downloading a range or a single PDF

We download the full range when running for the first time. We start at 77, since I could not decipher the format of the earlier PDFs.
###Code
for i in range(77,263):
descarga(URL_reg.format(i),i)
###Output
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_77_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_78_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_79_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_80_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_81_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_82_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_83_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_84_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_85_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_86_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_87_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_88_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_89_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_90_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_91_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_92_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_93_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_94_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_95_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_96_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_97_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_98_COVID-19.pdf
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_99_COVID-19.pdf
###Markdown
Downloading a single file

As an example, report 265 corresponds to Friday, 4 December 2020; we save the file in the local directory under the document's sequence number.
###Code
descarga(URL_reg.format(265),265)
###Output
Descargando: https://www.mscbs.gob.es/profesionales/saludPublica/ccayes/alertasActual/nCov/documentos/Actualizacion_265_COVID-19.pdf
###Markdown
Function to extract the text of the relevant page from the PDF report
###Code
# Extract PDF text using PDFMiner. Adapted from
# http://stackoverflow.com/questions/5725278/python-help-using-pdfminer-as-a-library
def pdf_to_text(pdfname, pagenum=None):
# PDFMiner boilerplate
rsrcmgr = PDFResourceManager()
sio = StringIO()
laparams = LAParams()
device = None
try:
device = TextConverter(rsrcmgr, sio, laparams=laparams)
interpreter = PDFPageInterpreter(rsrcmgr, device)
# Extract text
with open(pdfname, 'rb') as fp:
for i, page in enumerate(PDFPage.get_pages(fp)):
if pagenum is None or pagenum == i:
interpreter.process_page(page)
# Get text from StringIO
text = sio.getvalue()
finally:
# Cleanup
sio.close()
if device is not None:
device.close()
return text
###Output
_____no_output_____
###Markdown
We create the pandas DataFrame `datos`, into which we add the data as it is read.
###Code
datos=pd.DataFrame()
###Output
_____no_output_____
###Markdown
Reports 235 to the present
###Code
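# Note (added comment): the first pattern below is immediately overwritten by the second,
# which matches the word "Total" followed by the row of 19 cumulative figures
# (one per autonomous community plus Ceuta and Melilla) and captures that row as group 2.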
pattern=re.compile(r'(\n{1,2}\d*[,.]?\d+¥? ){19}')
pattern=re.compile(r'(\n\nTotal) ((\n{1,2}\d*[,.]?\d+¥? ){19})')
for i in range(235,266):
fn='datos_sanidad/'+str(i)+'.pdf'
fn1 = fn.replace('.pdf', '.txt')
text = pdf_to_text(fn, pagenum=4) # which page do we want to work with?
cadena=re.search(pattern,text).group(2)
print ('Documento {}-->'.format(i),cadena.replace('¥','').replace('\n','').split())
datos.loc[:,str(i)]=cadena.replace('¥','').replace('\n','').split()
###Output
Documento 235--> ['2.270', '1566', '391', '352', '267', '248', '3.338', '3.446', '5.960', '13', '1.719', '676', '859', '10.211', '11', '283', '668', '2.032', '442']
Documento 236--> ['2.314', '1598', '409', '356', '270', '248', '3.349', '3.502', '5.961', '16', '1.727', '687', '882', '10.247', '11', '288', '675', '2.038', '453']
Documento 237--> ['2.370', '1613', '414', '357', '273', '250', '3.367', '3.528', '5.968', '16', '1.737', '697', '884', '10.327', '11', '299', '687', '2.046', '454']
Documento 238--> ['2.402', '1620', '419', '357', '277', '251', '3.383', '3.549', '5.972', '16', '1.751', '701', '895', '10.350', '12', '300', '694', '2.061', '456']
Documento 239--> ['2.432', '1627', '427', '357', '278', '252', '3.405', '3.567', '5.977', '18', '1.762', '703', '895', '10.403', '12', '306', '699', '2.061', '458']
Documento 240--> ['2.470', '1654', '437', '362', '282', '252', '3.425', '3.599', '5.991', '19', '1.773', '714', '904', '10.419', '12', '317', '703', '2.082', '463']
Documento 241--> ['2.570', '1691', '466', '363', '286', '255', '3.437', '3.659', '6.001', '23', '1.782', '732', '934', '10.434', '12', '336', '722', '2.084', '470']
Documento 242--> ['2.602', '1706', '484', '364', '289', '255', '3.445', '3.673', '6.036', '26', '1.799', '743', '949', '10.438', '12', '341', '726', '2.136', '471']
Documento 243--> ['2.664', '1730', '497', '371', '294', '254', '3.414', '3.757', '7.073', '29', '1.857', '754', '957', '10.747', '13', '359', '736', '2.135', '477']
Documento 244--> ['2.695', '1753', '520', '372', '295', '257', '3.438', '3.810', '7.147', '29', '1.875', '760', '969', '10.821', '14', '378', '738', '2.136', '479']
Documento 245--> ['2.781', '1780', '534', '376', '296', '258', '3.452', '3.834', '7.225', '32', '1.886', '770', '977', '10.842', '24', '389', '747', '2.148', '482']
Documento 246--> ['2.885', '1860', '582', '378', '301', '264', '3.475', '3.911', '7.272', '33', '1.907', '786', '997', '10.850', '27', '403', '767', '2.155', '492']
Documento 247--> ['2.975', '1881', '603', '380', '306', '267', '3.491', '3.957', '7.299', '37', '1.935', '800', '1018', '10.859', '28', '425', '771', '2.227', '497']
Documento 248--> ['3.037', '1911', '622', '380', '308', '269', '3.504', '3.999', '7.311', '39', '1.970', '805', '1029', '10.946', '30', '442', '776', '2.229', '498']
Documento 249--> ['3.105', '1943', '649', '383', '310', '270', '3.525', '4.034', '7.340', '42', '1.983', '813', '1052', '11.011', '31', '452', '784', '2.230', '504']
Documento 250--> ['3.155', '1966', '661', '389', '311', '274', '3.545', '4.070', '7.362', '44', '2.013', '826', '1061', '11.053', '32', '472', '787', '2.242', '506']
Documento 251--> ['3.264', '2039', '732', '391', '312', '279', '3.574', '4.122', '7.382', '46', '2.028', '838', '1084', '11.082', '33', '486', '800', '2.245', '516']
Documento 252--> ['3.354', '2067', '739', '397', '318', '282', '3.584', '4.188', '7.410', '46', '2.055', '850', '1101', '11.111', '34', '501', '803', '2.331', '517']
Documento 253--> ['3.424', '2106', '755', '397', '319', '291', '3.597', '4.237', '7.426', '45', '2.108', '860', '1108', '11.158', '34', '512', '810', '2.331', '521']
Documento 254--> ['3.472', '2129', '775', '398', '321', '292', '3.635', '4.253', '7.450', '45', '2.124', '872', '1114', '11.181', '35', '521', '817', '2.334', '523']
Documento 255--> ['3.527', '2148', '800', '398', '322', '293', '3.659', '4.293', '7.495', '46', '2.168', '874', '1131', '11.201', '35', '543', '822', '2.338', '526']
Documento 256--> ['3.633', '2189', '857', '398', '326', '297', '3.687', '4.343', '7.596', '49', '2.188', '895', '1153', '11.211', '36', '555', '845', '2.343', '530']
Documento 257--> ['3.725', '2208', '883', '399', '333', '297', '3.693', '4.394', '7.692', '51', '2.237', '902', '1172', '11.250', '37', '566', '850', '2.446', '533']
Documento 258--> ['3.790', '2227', '904', '402', '334', '298', '3.722', '4.434', '7.793', '52', '2.262', '909', '1173', '11.279', '38', '583', '853', '2.447', '537']
Documento 259--> ['3.842', '2251', '929', '404', '336', '300', '3.738', '4.463', '7.850', '52', '2.288', '915', '1178', '11.349', '39', '589', '856', '2.452', '543']
Documento 260--> ['3.903', '2279', '954', '404', '339', '304', '3.761', '4.487', '7.911', '54', '2.298', '928', '1184', '11.352', '40', '597', '863', '2.467', '543']
Documento 261--> ['4.018', '2314', '1.004', '409', '344', '312', '3.775', '4.545', '7.911', '54', '2.330', '941', '1213', '11.359', '41', '604', '875', '2.470', '550']
Documento 262--> ['4.099', '2335', '1.021', '411', '344', '316', '3.786', '4.576', '8.003', '54', '2.371', '952', '1231', '11.369', '41', '611', '884', '2.554', '553']
Documento 263--> ['4.184', '2344', '1.036', '413', '345', '320', '3.810', '4.599', '8.067', '55', '2.395', '956', '1229', '11.380', '42', '615', '884', '2.556', '554']
Documento 264--> ['4.251', '2354', '1.054', '423', '346', '321', '3.819', '4.609', '8.102', '55', '2.418', '966', '1232', '11.413', '43', '620', '888', '2.564', '560']
Documento 265--> ['4.296', '2360', '1.073', '425', '346', '324', '3.825', '4.647', '8.126', '55', '2.444', '971', '1246', '11.426', '43', '623', '893', '2.568', '561']
###Markdown
A few days failed to scrape and I could not fix them, so we enter their values manually:
###Code
dia_234=[2183,1559,383,344,266,248,3336,3421,5958,13,1709,670,850,10155,11,279,666,2029,441]
dia_137=[1404,826,314,209,151,202,2945,1928,5587,4,1332,508,609,8691,2,148,490,1424,362]
datos.loc[:,'234']=dia_234
datos.loc[:,'137']=dia_137
###Output
_____no_output_____
###Markdown
Now we add the older series, from 77 to 234.
###Code
texto1="Total"
texto2="\n\n"
for i in range(77,234): #original 100 a 234
fn='datos_sanidad/'+str(i)+'.pdf'
fn1 = fn.replace('.pdf', '.txt')
text = pdf_to_text(fn, pagenum=1)
#with open(fn1, 'w', encoding='utf-8') as fp:
#with open(fn1, 'w') as fp:
# fp.write(page1)
#with open(fn1) as fp:
# text = fp.read()
#lista=text.partition(texto1)[2].partition(texto1)[2].partition(texto1)[2].replace('\n','').split(' ')
if ((((i >= 122) & (i<=139))) & (i!=137)):
lista=text.partition(texto1)[2].replace('\n','').split(' ')
print (i,' -> ',lista[113])
#print (i,'ojo',' ->',lista[113:113+19])
datos.loc[:,str(i)]=lista[113:113+19]
elif (i==151):
lista=text.partition(texto1)[2].replace('\n','').split(' ')
print (i,' -> ',lista[127])
datos.loc[:,str(i)]=lista[127:127+19]
elif (i==154):
lista=text.partition(texto1)[2].replace('\n','').split(' ')
print (i,' -> ',lista[123])
datos.loc[:,str(i)]=lista[123:123+19]
elif (i!=137):
lista=text.partition(texto1)[2].partition(texto1)[2].partition(texto1)[2].replace('\n','').split(' ')
print (i,' -> ',lista[1])
datos.loc[:,str(i)]=lista[1:1+19]
#print (i,' -> ',lista[0],'-->',lista[1:20])
###Output
77 -> 912
78 -> 940
79 -> 967
80 -> 993
81 -> 1.013
82 -> 1.017
83 -> 1.050
84 -> 1.079
85 -> 1.107
86 -> 1.131
87 -> 1.145
88 -> 1.157
89 -> 1.168
90 -> 1.188
91 -> 1.207
92 -> 1.238
93 -> 1.253
94 -> 1.256
95 -> 1.263
96 -> 1.267
97 -> 1.281
98 -> 1.294
99 -> 1.301
100 -> 1.317
101 -> 1.320
102 -> 1.322
103 -> 1.326
104 -> 1.332
105 -> 1.336
106 -> 1.344
107 -> 1.355
108 -> 1.358
109 -> 1.358
110 -> 1.358
111 -> 1.371
112 -> 1.375
113 -> 1.377
114 -> 1.389
115 -> 1.391
116 -> 1.334
117 -> 1.404
118 -> 1.404
119 -> 1.404
120 -> 1.404
121 -> 1.404
122 -> 1.404
123 -> 1.404
124 -> 1.404
125 -> 1.404
126 -> 1.404
127 -> 1.404
128 -> 1.404
129 -> 1.404
130 -> 1.404
131 -> 1.404
132 -> 1.404
133 -> 1.404
134 -> 1.404
135 -> 1.404
136 -> 1.404
138 -> 1.404
139 -> 1.404
140 -> 1.404
141 -> 1.426
142 -> 1.426
143 -> 1.426
144 -> 1.426
145 -> 1.426
146 -> 1.426
147 -> 1.426
148 -> 1.426
149 -> 1.426
150 -> 1.426
151 -> 1.426
152 -> 1.427
153 -> 1.428
154 -> 1.428
155 -> 1.433
156 -> 1.433
157 -> 1.433
158 -> 1.434
159 -> 1.435
160 -> 1.435
161 -> 1.435
162 -> 1.435
163 -> 1.435
164 -> 1.435
165 -> 1.435
166 -> 1.435
167 -> 1.435
168 -> 1.435
169 -> 1.435
170 -> 1.436
171 -> 1.436
172 -> 1.435
173 -> 1.435
174 -> 1.435
175 -> 1.435
176 -> 1.435
177 -> 1.437
178 -> 1.437
179 -> 1.438
180 -> 1.438
181 -> 1.443
182 -> 1.443
183 -> 1.443
184 -> 1.445
185 -> 1.445
186 -> 1.452
187 -> 1.455
188 -> 1.456
189 -> 1.456
190 -> 1.460
191 -> 1.466
192 -> 1.471
193 -> 1.475
194 -> 1.476
195 -> 1.480
196 -> 1.493
197 -> 1.500
198 -> 1.506
199 -> 1.509
200 -> 1.516
201 -> 1.537
202 -> 1.545
203 -> 1.550
204 -> 1.566
205 -> 1.573
206 -> 1.593
207 -> 1.614
208 -> 1.631
209 -> 1.656
210 -> 1.662
211 -> 1.690
212 -> 1.702
213 -> 1.714
214 -> 1.714
215 -> 1.744
216 -> 1.780
217 -> 1.805
218 -> 1.827
219 -> 1.845
220 -> 1.867
221 -> 1.885
222 -> 1.933
223 -> 1.941
224 -> 1.965
225 -> 1.979
226 -> 2.020
227 -> 2.020
228 -> 2.053
229 -> 2.065
230 -> 2.094
231 -> 2.137
232 -> 2.176
233 -> 2.182
###Markdown
We keep the columns for the correct range of dates (from 77 up to today) and set the index to the proper names of the autonomous communities (CCAA).
###Code
datos
datos=datos[[str(i) for i in range(77,266)]]
datos.index=['Andalucia','Aragon','Asturias','Baleares','Canarias','Cantabria','Castilla La Mancha',
'Castilla y Leon','Cataluña','Ceuta','C.Valenciana','Extremadura','Galicia','Madrid','Melilla','Murcia',
'Navarra','Pais Vasco','La Rioja']
datos=datos.applymap(lambda x: int(str(x).replace(".","")))
###Output
_____no_output_____
###Markdown
We need an additional file that maps each document's sequence number to the date on which it was published.
###Code
claves=pd.read_excel(datadir+'clave_numero_fecha.xlsx')
claves
#claves=claves.loc[claves.index[:]]
datos.columns=claves['Fecha']
###Output
_____no_output_____
###Markdown
Finally we save the pandas DataFrame to a CSV file:
###Code
datos.to_csv('datos_sanidad_matriz.csv')
###Output
_____no_output_____
###Markdown
We also prepare a version in long (table) format:
###Code
datos_tabla=datos.unstack().reset_index()[['level_1','Fecha',0]]
datos_tabla.columns=['Comunidad','Fecha','Fallecidos']
###Output
_____no_output_____
###Markdown
which we also save to the local directory:
###Code
datos_tabla.to_csv('datos_sanidad_tabla.csv')
###Output
_____no_output_____ |
plugins/tidy3d/notebooks/02_GratingCoupler.ipynb | ###Markdown
Grating coupler

Based on the Tidy3D example notebook on [GitHub](https://github.com/flexcompute/tidy3d-notebooks/blob/main/GratingCoupler.ipynb)
###Code
# get the most recent version of tidy3d
#pip install -q --upgrade tidy3d
# make sure notebook plots inline
%matplotlib inline
# basic imports
import numpy as np
import matplotlib.pylab as plt
# tidy3d imports
import tidy3d as td
import tidy3d.web as web
###Output
_____no_output_____
###Markdown
Problem Setup

In this example, we model a 3D grating coupler in a Silicon on Insulator (SOI) platform. A basic schematic of the design is shown below. The simulation is about 19um x 4um x 5um with a wavelength of 1.55um and takes about 1 minute to simulate 10,000 time steps.

In the simulation, we inject a modal source into the waveguide and propagate it towards the grating structure. The radiation from the grating coupler is then measured with a near field monitor and we use a far field projection to inspect the angular dependence of the radiation.
###Code
# basic parameters (note, all length units are microns)
nm = 1e-3
wavelength = 1550 * nm
# resolution
grids_per_wavelength = 50.0
dl = wavelength / grids_per_wavelength
# waveguide
wg_width = 400 * nm
wg_height = 220 * nm
wg_length = 2 * wavelength
# surrounding
sub_height = 2.0
clad_height = 2.0
buffer = 0.5 * wavelength
# coupler
cp_width = 2 * wavelength
cp_length = 10 * wavelength
taper_length = 4 * wavelength
# sizes
Lx = buffer + wg_length + taper_length + cp_length
Ly = buffer + cp_width + buffer
Lz = sub_height + wg_height + clad_height
sim_size = [Lx, Ly, Lz]
# convenience variables to store center of coupler and waveguide
wg_center_x = +Lx/2 - buffer - (wg_length + taper_length)/2
cp_center_x = -Lx/2 + buffer + cp_length/2
wg_center_z = -Lz/2 + sub_height + wg_height/2
cp_center_z = -Lz/2 + sub_height + wg_height/2
# materials
Clad = td.Medium(epsilon=1.44**2)
Si = td.Medium(epsilon=3.47**2)
SiO2 = td.Medium(epsilon=1.44**2)
# source parameters
freq0 = td.constants.C_0 / wavelength
fwidth = freq0 / 10
run_time = 100 / fwidth
# PML layers
npml = 15
###Output
_____no_output_____
###Markdown
Mode Solve

To determine the grating pitch for a given design angle, we need to compute the effective index of the waveguide mode being coupled into. For this, we set up a simple simulation of the coupler region and use the mode solver to get the effective index. We will not run this simulation; we just add a ``ModeMonitor`` object in order to call the mode solver, ``sim.compute_modes()`` below, and get the effective index of the wide-waveguide region.
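(Added note) For a uniform grating, the code below chooses the pitch from the first-order phase-matching condition, with the radiated wave treated as propagating into free space:

$$\Lambda = \frac{\lambda}{n_{\text{eff}} - \sin\theta_{\text{design}}}.$$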
###Code
# grating parameters
design_theta_deg = 30
design_theta_rad = np.pi * design_theta_deg / 180
grating_height = 70 * nm
# do a mode solve to get neff of the coupler
sub = td.Box(
center=[0, 0, -Lz/2],
size=[td.inf, td.inf, 2 * sub_height],
material=SiO2,
name='substrate')
clad = td.Box(
center=[0, 0, Lz/2],
size=[td.inf, td.inf, td.inf],
material=SiO2,
name='clad')
cp = td.Box(
center=[0, 0, cp_center_z-grating_height/4],
size=[td.inf, cp_width, wg_height-grating_height/2],
material=Si,
name='coupler')
mode_mnt = td.ModeMonitor(
center=[0, 0, 0],
size=[0, 8*cp_width, 8*wg_height],
freqs=freq0)
sim = td.Simulation(
size=sim_size,
mesh_step=[dl, dl, dl],
structures=[clad, sub, cp],
sources=[],
monitors=[mode_mnt],
run_time=run_time,
pml_layers=[npml, npml, npml])
sim.viz_mat_2D(normal='x');
sim.viz_eps_2D(normal='x', cbar=True)
# Compute and visualize the first two modes of the monitor
sim.compute_modes(mode_mnt, Nmodes=2)
sim.viz_modes(mode_mnt)
# Get the data for the mode corresponding to frequency index 0 and mode index 0
mode = sim.data(mode_mnt)["modes"][0][0]
neff = mode.neff
print(f'\n\neffective index of coupler region, mode 0 = {neff}')
###Output
_____no_output_____
###Markdown
Create Simulation

Now we set up the grating coupler to simulate in Tidy3D.
###Code
# gratings
design_theta_deg = 10
design_theta_rad = np.pi * design_theta_deg / 180
pitch = wavelength / (neff - np.sin(design_theta_rad))
grating_length = pitch / 2.0
num_gratings = int(cp_length / pitch)
sub = td.Box(
center=[0, 0, -Lz/2],
size=[td.inf, td.inf, 2 * sub_height],
material=SiO2,
name='substrate')
wg = td.Box(
center=[wg_center_x, 0, wg_center_z],
size=[buffer + wg_length + taper_length + cp_length/2, wg_width, wg_height],
material=Si,
name='waveguide')
cp = td.Box(
center=[cp_center_x, 0, cp_center_z],
size=[cp_length, cp_width, wg_height],
material=Si,
name='coupler')
tp = td.PolySlab(
vertices=[
[cp_center_x + cp_length/2 + taper_length, + wg_width/2],
[cp_center_x + cp_length/2 + taper_length, - wg_width/2],
[cp_center_x + cp_length/2, - cp_width/2],
[cp_center_x + cp_length/2, + cp_width/2]],
z_cent=wg_center_z,
z_size=wg_height,
material=Si,
name='taper')
grating_left_x = cp_center_x - cp_length/2
gratings = [
td.Box(
center=[grating_left_x + (i + 0.5) * pitch, 0, cp_center_z + wg_height/2 - grating_height/2],
size=[grating_length, cp_width, grating_height],
material=Clad,
name=f'{i}th_grating')
for i in range(num_gratings)]
mode_source = td.ModeSource(
td.GaussianPulse(freq0, fwidth, phase=0),
center=[Lx/2 - buffer, 0, cp_center_z],
size=[0, 8*wg_width, 8*wg_height],
direction='backward',
amplitude=1.0,
name='modal_source')
# distance to near field monitor
nf_offset = 50 * nm
plane_monitor = td.FreqMonitor(
center=[0, 0, cp_center_z],
size=[Lx, Ly, 0],
freqs=freq0,
name='full_domain_fields')
rad_monitor = td.FreqMonitor(
center=[0, 0, 0],
size=[Lx, 0, Lz],
freqs=freq0,
name='full_domain_fields')
near_field_monitor = td.FreqMonitor(
center=[cp_center_x, 0, cp_center_z + wg_height/2 + nf_offset],
size=[cp_length, cp_width, 0],
freqs=freq0,
name='radiated_near_fields')
sim = td.Simulation(
size=sim_size,
mesh_step=[dl, dl, dl],
structures=[sub, wg, cp, tp] + gratings,
sources=[mode_source],
monitors=[plane_monitor, rad_monitor, near_field_monitor],
run_time=run_time,
pml_layers=[npml, npml, npml])
fig, axes = plt.subplots(1, 3, tight_layout=True, figsize=(14, 3))
for val, pos, ax in zip('xyz', (0, 0, -Lz/2 + sub_height + wg_height/2), axes):
sim.viz_eps_2D(normal=val, position=pos, ax=ax, cbar=True)
ax1, ax2 = sim.viz_source(mode_source)
ax1.set_xlim((0, 0.1e-12)) # note the pulse extends far beyond this time, adjust lims to inspect
plt.show()
sim.compute_modes(mode_source, Nmodes=2)
sim.viz_modes(mode_source)
# Mode 0 is the Ey dominant mode, so we choose that
mode_ind = 0
sim.set_mode(mode_source, mode_ind)
###Output
_____no_output_____
###Markdown
Run Simulation

Run the simulation and plot the field patterns.
###Code
# create a project, upload to our server to run
project = web.new_project(sim.export(), task_name='grating_coupler')
task_id = project['taskId']
web.monitor_project(task_id)
# download the results and load into the original simulation
print('downloading results...')
web.download_results(task_id, target_folder='out')
print('done\n')
sim.load_results('out/monitor_data.hdf5')
with open("out/tidy3d.log") as f:
print(f.read())
fig, axes = plt.subplots(3, 1, tight_layout=True, figsize=(10, 8))
for monitor, cpmp, ax in zip([plane_monitor, rad_monitor, near_field_monitor], 'yyy', axes):
sim.viz_field_2D(monitor, comp=cpmp, ax=ax, cbar=True)
###Output
_____no_output_____
###Markdown
Far Field Projection

Now we use the Near2Far feature of Tidy3D to compute the angular dependence of the far-field scattering based on the near field monitor.
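(Added note) Rearranging the same first-order phase-matching condition, the main radiation lobe is expected near

$$\theta \approx \arcsin\!\left(n_{\text{eff}} - \frac{\lambda}{\Lambda}\right),$$

which is exactly the sanity check performed at the end of the cell below.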
###Code
# create range of angles to probe (note: polar coordinates, theta = 0 corresponds to vertical (z axis))
num_angles = 1101
thetas = np.linspace(-np.pi/2, np.pi/2, num_angles)
# make a near field to far field projector with the near field monitor data
n2f = td.Near2Far(sim.data(near_field_monitor))
# loop through angles and record the scattered cross section
Ps = np.zeros(num_angles)
for i in range(num_angles):
Ps[i] = n2f.get_radar_cross_section(thetas[i], 0.0)
# plot the angle dependence
fig, ax = plt.subplots(subplot_kw={'projection': 'polar'}, figsize=(5,5))
ax.plot(thetas, Ps, label='measured')
ax.plot([design_theta_rad, design_theta_rad], [0, np.max(Ps)*0.7], 'r--', alpha=0.8, label='design angle')
ax.set_theta_zero_location("N")
ax.set_yticklabels([])
ax.set_title("Scattered Cross-section (arb. units)", va='bottom')
plt.legend()
plt.show()
theta_expected = np.arcsin(np.abs(neff - wavelength / pitch))
print(f'expect angle of {(theta_expected * 180 / np.pi):.2f} degrees')
i_max = np.argmax(Ps)
print(f'got maximum angle of {(thetas[i_max] * 180 / np.pi):.2f} degrees')
###Output
_____no_output_____ |
Python_Scripts/bert_IMDB.ipynb | ###Markdown
BERT: model fine-tuning

Using the IMDB dataset as an example, we show how to fine-tune a BERT model on an external dataset. This part only performs training and testing; the do_eval functionality inside run_classifier.py is not used.

Prerequisites
- Download the pretrained model checkpoint, vocab.txt and bert_config.json files and place them in the 'bert_model' folder.

Import packages
###Code
import os
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.utils import shuffle
os.chdir('../')
import tokenization
from run_classifier import *
###Output
/opt/conda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
IMDB sentiment dataset
###Code
# download data from stanford AI Lab
# source : http://ai.stanford.edu/~amaas/data/sentiment/
if not os.path.exists('aclImdb'):
!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar zxf aclImdb_v1.tar.gz
!rm aclImdb_v1.tar.gz
def load_file(dir_path):
article_list = []
for file in os.listdir(dir_path):
with open(os.path.join(dir_path,file),'r') as f:
article_list.append(f.readlines()[0])
return article_list
def write_tf_record(output_file_name, label_list, vocab_file, x, y = None, do_lower_case = True, max_seq_length = 128):
if y is not None:
input_data = zip(x, y)
else:
# no labels supplied: pair each text with a dummy label so the (text, label) unpacking below still works
input_data = zip(x, [label_list[0]] * len(x))
examples = []
for i, (text , label) in enumerate(input_data):
text_a = tokenization.convert_to_unicode(text)
examples.append(InputExample(guid = i, text_a = text_a, text_b = None, label = label))
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=do_lower_case)
file_based_convert_examples_to_features(examples, label_list, max_seq_length, tokenizer, output_file_name)
# read data
pos_train_data = load_file('./aclImdb/train/pos')
neg_train_data = load_file('./aclImdb/train/neg')
pos_test_data = load_file('./aclImdb/test/pos')
neg_test_data = load_file('./aclImdb/test/neg')
# convert data to tf_record file
'''settings'''
do_lower_case = True
max_seq_length = 128
vocab_file='bert_model/vocab.txt'
train_file = 'tmp/train.tf_record'
test_file = 'tmp/test.tf_record'
''''''
# shuffle training-set
train_x = pos_train_data+neg_train_data
train_y = ['pos' for _ in pos_train_data]+['neg' for _ in neg_train_data]
train_x, train_y = shuffle(train_x, train_y)
# make directory if 'tmp' is not exist
if not os.path.exists('tmp'):
os.mkdir('tmp')
# write train.tf_record
write_tf_record(
output_file_name = train_file,
label_list = ['pos','neg'],
vocab_file = vocab_file,
x = train_x,
y = train_y)
# write test.tf_record
write_tf_record(
output_file_name = test_file,
label_list = ['pos','neg'],
vocab_file = vocab_file,
x = pos_test_data+neg_test_data,
y = ['pos' for _ in pos_test_data]+['neg' for _ in neg_test_data])
###Output
INFO:tensorflow:Writing example 0 of 25000
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 0
INFO:tensorflow:tokens: [CLS] hands down the worst movie i have ever seen . i thought nothing would ever det ##hro ##ne last action hero , but this does easily . the movie is about 3 single gu ##ys who meet on sun ##day ##s to discuss their sexual es ##cap ##ades from the weekend . a fourth gu ##y - who is married and - that used to be a part of the group shows up and talks about what he and his wife do . nothing works in this movie . the jo ##kes are not fun ##ny but they are repeated throughout the movie . the big kick ##er at the end of the movie is lau ##gha ##ble . avoid at all costs . [SEP]
INFO:tensorflow:input_ids: 101 27925 12935 10105 62006 18379 177 10529 17038 15652 119 177 18957 33338 10894 17038 10349 106543 10238 12469 14204 51670 117 10473 10531 15107 35024 119 10105 18379 10124 10978 124 11376 75980 12682 10479 23267 10135 42230 24558 10107 10114 71695 10455 19616 10196 93103 16013 10188 10105 43440 119 169 16918 75980 10157 118 10479 10124 13524 10111 118 10189 11031 10114 10347 169 10668 10108 10105 11795 15573 10741 10111 56672 10978 12976 10261 10111 10226 14384 10149 119 33338 14009 10106 10531 18379 119 10105 12541 21885 10301 10472 41807 10756 10473 10689 10301 57026 15916 10105 18379 119 10105 22185 55321 10165 10160 10105 11572 10108 10105 18379 10124 27207 102121 11203 119 33253 10160 10435 34495 119 102 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: neg (id = 1)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 1
INFO:tensorflow:tokens: [CLS] in celebration of earth day dis ##ney has released the film " earth " . stop ##ping far short of any strid ##ent message of g ##lo ##om and do ##om , we are treated to some excellent footage of animals in their habitats without feeling too bad about our ##sel ##ves . < br / > < br / > the stars of the show are a her ##d of ele ##pha ##nts , a family of polar bears and a w ##hale and its cal ##f . the narrative begins at the north pole and proceeds south until we reach the trop ##ics , all the while being introduced to deniz ##ens of the various climat ##ic zones traverse ##d . < br / [SEP]
INFO:tensorflow:input_ids: 101 10106 69173 10108 39189 11940 27920 19029 10393 11539 10105 10458 107 39189 107 119 20517 15398 13301 13716 10108 11178 106743 11405 30514 10108 175 10715 10692 10111 10149 10692 117 11951 10301 45369 10114 11152 50337 67953 10108 22528 10106 10455 51897 13663 61362 16683 15838 10978 17446 12912 13136 119 133 33989 120 135 133 33989 120 135 10105 20756 10108 10105 11897 10301 169 10485 10162 10108 12637 37590 14073 117 169 11365 10108 45844 77911 10111 169 191 39149 10111 10474 25923 10575 119 10105 57265 26462 10160 10105 12756 21326 10111 105309 13144 11444 11951 24278 10105 27830 16981 117 10435 10105 11371 11223 17037 10114 97777 12457 10108 10105 13547 60733 11130 20437 53885 10162 119 133 33989 120 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 2
INFO:tensorflow:tokens: [CLS] offering a killer com ##bo of terrible writing , terrible acting and terrible direction , it ' s a tos ##su ##p whether kin ##jit ##e : for ##bidden subjects is offensive ##ly bad or just hil ##ario ##usly bad . it ' s almost as if someone ran a competition to make the sl ##ea ##zie ##st , seed ##iest can ##non film . as if a g ##lance at a cast list including characters like ' les ##bian pe ##do ##phile , ' ' per ##verte ##d gent ##leman , ' ' por ##no actress ' were ##n ' t enough , it ' s your only chance to see char ##les brons ##on ' s cop throw a low ##life on a bed [SEP]
INFO:tensorflow:input_ids: 101 42281 169 61976 10212 11790 10108 70032 17637 117 70032 25086 10111 70032 15599 117 10271 112 187 169 84686 12892 10410 21883 37403 47817 10112 131 10142 71810 38567 10124 31820 10454 15838 10345 12820 48989 16780 61289 15838 119 10271 112 187 17122 10146 12277 30455 17044 169 16622 10114 13086 10105 38523 11233 14548 10562 117 49282 66820 10944 17518 10458 119 10146 12277 169 175 61883 10160 169 18922 13416 11198 19174 11850 112 10152 42041 11161 10317 86247 117 112 112 10178 73918 10162 63991 93302 117 112 112 10183 10343 24268 112 10309 10115 112 188 21408 117 10271 112 187 20442 10893 27893 10114 12888 101328 11268 83298 10263 112 187 35691 73696 169 15626 57156 10135 169 30113 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: neg (id = 1)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 3
INFO:tensorflow:tokens: [CLS] ch ##ris , an adopted son of a moral family , a loser whom works at the school newspaper with kat ##e ( ch ##risti ##ne lakin from of the aw ##ful sugar ##y " step by step " show of the now than ##k ##fully def ##unct ab ##c ' s t ##gi ##f line - up ) , finds out that he ' s just inherited a por ##n empire from his biological parents . he lose ##s sight of what true friendship and love is and bl ##ah bl ##ah some other non ##sens ##e . he also has to conte ##nd with an uncle who wants control of the family business and a shift ##y lawyer ( ar ##n ' t they [SEP]
INFO:tensorflow:input_ids: 101 18643 12125 117 10151 24726 10312 10108 169 23680 11365 117 169 55526 18104 14009 10160 10105 11393 22047 10169 27689 10112 113 18643 79846 10238 85816 10188 10108 10105 56237 14446 60390 10157 107 31877 10155 31877 107 11897 10108 10105 11858 11084 10174 42920 100745 108647 11357 10350 112 187 188 11210 10575 12117 118 10741 114 117 31478 10950 10189 10261 112 187 12820 62929 169 10183 10115 34873 10188 10226 48806 17293 119 10261 48742 10107 78327 10108 12976 22024 74447 10111 16138 10124 10111 21484 12257 21484 12257 11152 10684 10446 59077 10112 119 10261 10379 10393 10114 26777 11534 10169 10151 49121 10479 45769 12608 10108 10105 11365 14155 10111 169 51467 10157 38055 113 10456 10115 112 188 10689 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: neg (id = 1)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 4
INFO:tensorflow:tokens: [CLS] i got to see this film at a pre ##view and was da ##zzle ##d by it . it ' s not the typical romantic comedy . i can ' t remember lau ##ghing so hard at a film and yet being moved by it . the lau ##gh ##s aren ' t ga ##gs here - - they ' re observations , lau ##gh ##s of recognition , little shock ##s of " o ##h , my god , i thought i was the only one who felt that way ! " i won ' t give away the plot , which is more than just " gu ##y falls in love with his brother ' s girlfriend . " the whole family plays a [SEP]
INFO:tensorflow:input_ids: 101 177 19556 10114 12888 10531 10458 10160 169 12229 30512 10111 10134 10143 75484 10162 10155 10271 119 10271 112 187 10472 10105 36772 57349 25737 119 177 10944 112 188 93161 27207 90427 10380 19118 10160 169 10458 10111 21833 11223 13059 10155 10271 119 10105 27207 15774 10107 99045 112 188 11887 15703 19353 118 118 10689 112 11639 39544 117 27207 15774 10107 10108 31477 117 16745 62868 10107 10108 107 183 10237 117 15127 22009 117 177 18957 177 10134 10105 10893 10464 10479 24666 10189 13170 106 107 177 11367 112 188 18090 14942 10105 32473 117 10319 10124 10798 11084 12820 107 75980 10157 35017 10106 16138 10169 10226 15739 112 187 77877 119 107 10105 21047 11365 17724 169 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:Writing example 10000 of 25000
INFO:tensorflow:Writing example 20000 of 25000
INFO:tensorflow:Writing example 0 of 25000
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 0
INFO:tensorflow:tokens: [CLS] this is one of my all time favorite che ##ap , corn ##y , va ##mpi ##re b movies . < br / > < br / > cal ##vin klein under ##wear model . . . o ##h , i mean , st ##efa ##n the good va ##mpi ##re , returns to trans ##yl ##vania to as ##cend the throne of va ##mpi ##ric royal ##ty , but mani ##cure - im ##pair ##ed and et ##ernal ##ly dro ##olin ##g half brother radu has other plans . having killed their father the va ##mpi ##re king , radu now sets his sight ##s on st ##efa ##n , st ##efa ##n ' s new mortal girlfriend mich ##elle and her two pretty friends [SEP]
INFO:tensorflow:input_ids: 101 10531 10124 10464 10108 15127 10435 10635 55768 10262 16070 117 93599 10157 117 10321 35407 10246 170 39129 119 133 33989 120 135 133 33989 120 135 25923 15478 29185 10571 100629 13192 119 119 119 183 10237 117 177 36110 117 28780 67712 10115 10105 15198 10321 35407 10246 117 38302 10114 37241 27652 40207 10114 10146 89387 10105 53409 10108 10321 35407 18570 23954 11195 117 10473 52321 55888 118 10211 110547 10336 10111 10131 69966 10454 33741 52280 10240 13877 15739 83177 10393 10684 18195 119 13677 15875 10455 13194 10105 10321 35407 10246 20636 117 83177 11858 23597 10226 78327 10107 10135 28780 67712 10115 117 28780 67712 10115 112 187 10751 97952 77877 52866 14000 10111 10485 10551 108361 21997 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 1
INFO:tensorflow:tokens: [CLS] this is , without a doubt , one of the most accomplished debut films for any director . the movie is only 90 minutes long , but manages to say just about everything about life and death . not much action , and dialogue is minimal , but the movie flows perfect ##ly and demands your attention due to the won ##der ##fully natural feel of everything going on . the performances by the leads are perfect ##ion , and even some supporting characters get strong emotional scenes . the movie will be somewhat lost on today ' s modern audience , but this is one that everyone ou ##ght to see . ref ##res ##hing ##ly uns ##enti ##mental and hon ##est , this is [SEP]
INFO:tensorflow:input_ids: 101 10531 10124 117 13663 169 86697 117 10464 10108 10105 10992 83251 13424 14280 10142 11178 12461 119 10105 18379 10124 10893 10919 15304 11695 117 10473 75923 10114 23763 12820 10978 42536 10978 12103 10111 12557 119 10472 13172 14204 117 10111 51077 10124 57284 117 10473 10105 18379 41271 43477 10454 10111 64886 20442 21341 10850 10114 10105 11367 11304 42920 13409 38008 10108 42536 19090 10135 119 10105 22744 10155 10105 34868 10301 43477 11046 117 10111 13246 11152 32403 19174 15329 18093 59995 32483 119 10105 18379 11337 10347 43203 14172 10135 18745 112 187 13456 26070 117 10473 10531 10124 10464 10189 48628 10431 20687 10114 12888 119 48056 11234 30809 10454 15826 21688 51761 10111 14923 13051 117 10531 10124 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 2
INFO:tensorflow:tokens: [CLS] it was sur ##pris ##ing that a silent film could be so easy to watch . the economy with which it has been edited and the films structure itself are the main elements that contribute to this . < br / > < br / > the film really capture ##s the spirit of the revolution that it is dealing with - you really sy ##mpa ##thi ##se with the sail ##ors and citizens . of course , this film has it ' s own agenda , but as it is a practical ##ly red ##undan ##t cause , it can be viewed as a piece of entertainment in a much clear ##er sense . < br / > < br / > the tension created [SEP]
INFO:tensorflow:input_ids: 101 10271 10134 10326 45666 10230 10189 169 66720 10458 12174 10347 10380 44346 10114 34481 119 10105 27570 10169 10319 10271 10393 10590 27423 10111 10105 14280 13926 17587 10301 10105 12126 17464 10189 72484 10114 10531 119 133 33989 120 135 133 33989 120 135 10105 10458 30181 32083 10107 10105 41576 10108 10105 48336 10189 10271 10124 73082 10169 118 13028 30181 12261 31285 53504 10341 10169 10105 83595 16379 10111 29812 119 10108 15348 117 10531 10458 10393 10271 112 187 12542 70231 117 10473 10146 10271 10124 169 52940 10454 10680 83722 10123 15311 117 10271 10944 10347 51371 10146 169 26767 10108 38642 10106 169 13172 24866 10165 15495 119 133 33989 120 135 133 33989 120 135 10105 55027 13745 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 3
INFO:tensorflow:tokens: [CLS] i just wanted to say that when i was young my favorite t . v show back in the day was night heat . i loved the characters and the plot of the show . i thought that it was an excellent show and still do to this day . i enjoy watching the re ##run ##s and i am a big fan . i love the way the characters played off one another . i would always stay up late to watch my favorite show with my mother who also was a big fan . now i can enjoy watching my show again and listening to the theme song . which i thought was a cool song for the show . my favorite characters were [SEP]
INFO:tensorflow:input_ids: 101 177 12820 22591 10114 23763 10189 10841 177 10134 14739 15127 55768 188 119 190 11897 12014 10106 10105 11940 10134 16903 33955 119 177 82321 10105 19174 10111 10105 32473 10108 10105 11897 119 177 18957 10189 10271 10134 10151 50337 11897 10111 12647 10149 10114 10531 11940 119 177 84874 84532 10105 11639 35794 10107 10111 177 10392 169 22185 10862 119 177 16138 10105 13170 10105 19174 11553 11898 10464 12864 119 177 10894 19540 29597 10741 13002 10114 34481 15127 55768 11897 10169 15127 15293 10479 10379 10134 169 22185 10862 119 11858 177 10944 84874 84532 15127 11897 13123 10111 109130 10114 10105 26648 12011 119 10319 177 18957 10134 169 67420 12011 10142 10105 11897 119 15127 55768 19174 10309 102
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:*** Example ***
INFO:tensorflow:guid: 4
INFO:tensorflow:tokens: [CLS] my wife and i find this movie to be a won ##der ##ful pick - me - up when we need to have a good lau ##gh - the conflict between some characters and the rep ##ore between others make this a sure fire comedy relief . i am so looking forward to this movie coming on d ##vd so i can replace my well watched v ##hs . [SEP]
INFO:tensorflow:input_ids: 101 15127 14384 10111 177 17860 10531 18379 10114 10347 169 11367 11304 14446 36833 118 10911 118 10741 10841 11951 17367 10114 10529 169 15198 27207 15774 118 10105 24620 10948 11152 19174 10111 10105 76456 13024 10948 14633 13086 10531 169 62452 13559 25737 31276 119 177 10392 10380 34279 23307 10114 10531 18379 23959 10135 172 54685 10380 177 10944 37156 15127 11206 92147 190 22394 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
INFO:tensorflow:label: pos (id = 0)
INFO:tensorflow:Writing example 10000 of 25000
INFO:tensorflow:Writing example 20000 of 25000
###Markdown
Model training
- Train for 3 epochs
###Code
'''settings'''
vocab_file='bert_model/vocab.txt'
data_dir = 'tmp/'
bert_config_file = 'bert_model/bert_config.json'
init_checkpoint = 'bert_model/bert_model.ckpt'
train_file = 'tmp/train.tf_record'
output_dir = 'tmp/'
label_list = ['pos','neg']
num_train_examples = 25000 #len(pos_train_data+neg_train_data)
train_batch_size = 32
num_train_epochs = 3
learning_rate = 2e-5
iterations_per_loop = 1000
save_checkpoints_steps = 10000
eval_batch_size = 8
predict_batch_size = 8
max_seq_length = 128
warmup_proportion = 0.1
num_train_steps = int(
(num_train_examples) / train_batch_size * num_train_epochs)
num_warmup_steps = int(num_train_steps * warmup_proportion)
tpu_cluster_resolver = None
master = None
num_tpu_cores = None
use_tpu = False
''''''
# The code below mostly reproduces the relevant sections extracted from run_classifier.py
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
model_fn = model_fn_builder(
bert_config=bert_config,
num_labels=len(label_list),
init_checkpoint=init_checkpoint,
learning_rate=learning_rate,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=use_tpu,
use_one_hot_embeddings=use_tpu)
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=master,
model_dir=output_dir,
save_checkpoints_steps=save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=iterations_per_loop,
num_shards=num_tpu_cores,
per_host_input_for_training=is_per_host))
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
predict_batch_size=predict_batch_size)
train_input_fn = file_based_input_fn_builder(
input_file=train_file,
seq_length=max_seq_length,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
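# --- Added sketch (not part of the original notebook): scoring the held-out test set. ---
# In this version of run_classifier, model_fn returns {"probabilities": ...} in predict
# mode, so the records written to test_file can be scored with the same input-fn builder.
# The 12,500/12,500 pos/neg ordering below assumes the standard aclImdb test split and
# the order in which test.tf_record was written above.
predict_input_fn = file_based_input_fn_builder(
    input_file=test_file,
    seq_length=max_seq_length,
    is_training=False,
    drop_remainder=False)
test_probs = np.array([p["probabilities"] for p in estimator.predict(input_fn=predict_input_fn)])
pred_labels = [label_list[i] for i in test_probs.argmax(axis=1)]
true_labels = ['pos'] * 12500 + ['neg'] * 12500
print('test accuracy:', np.mean([p == t for p, t in zip(pred_labels, true_labels)]))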
###Output
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f26d8bd7b70>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': 'tmp/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 10000, '_save_checkpoints_secs': None, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f26d7044b00>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=None, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Running train on CPU
INFO:tensorflow:*** Features ***
INFO:tensorflow: name = input_ids, shape = (32, 128)
INFO:tensorflow: name = input_mask, shape = (32, 128)
INFO:tensorflow: name = label_ids, shape = (32,)
INFO:tensorflow: name = segment_ids, shape = (32, 128)
INFO:tensorflow:**** Trainable Variables ****
INFO:tensorflow: name = bert/embeddings/word_embeddings:0, shape = (119547, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/token_type_embeddings:0, shape = (2, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/position_embeddings:0, shape = (512, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/pooler/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/pooler/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = output_weights:0, shape = (2, 768)
INFO:tensorflow: name = output_bias:0, shape = (2,)
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into tmp/model.ckpt.
INFO:tensorflow:global_step/sec: 1.93235
INFO:tensorflow:examples/sec: 61.8352
INFO:tensorflow:global_step/sec: 2.05756
INFO:tensorflow:examples/sec: 65.8418
INFO:tensorflow:global_step/sec: 2.05014
INFO:tensorflow:examples/sec: 65.6046
INFO:tensorflow:global_step/sec: 2.04651
INFO:tensorflow:examples/sec: 65.4883
INFO:tensorflow:global_step/sec: 2.04525
INFO:tensorflow:examples/sec: 65.448
INFO:tensorflow:global_step/sec: 2.04444
INFO:tensorflow:examples/sec: 65.4222
INFO:tensorflow:global_step/sec: 2.04353
INFO:tensorflow:examples/sec: 65.3928
INFO:tensorflow:global_step/sec: 2.04369
INFO:tensorflow:examples/sec: 65.3982
INFO:tensorflow:global_step/sec: 2.04376
INFO:tensorflow:examples/sec: 65.4002
INFO:tensorflow:global_step/sec: 2.042
INFO:tensorflow:examples/sec: 65.344
INFO:tensorflow:global_step/sec: 2.04317
INFO:tensorflow:examples/sec: 65.3814
INFO:tensorflow:global_step/sec: 2.04294
INFO:tensorflow:examples/sec: 65.3742
INFO:tensorflow:global_step/sec: 2.0425
INFO:tensorflow:examples/sec: 65.3599
INFO:tensorflow:global_step/sec: 2.04278
INFO:tensorflow:examples/sec: 65.369
INFO:tensorflow:global_step/sec: 2.04186
INFO:tensorflow:examples/sec: 65.3397
INFO:tensorflow:global_step/sec: 2.04163
INFO:tensorflow:examples/sec: 65.332
INFO:tensorflow:global_step/sec: 2.04113
INFO:tensorflow:examples/sec: 65.316
INFO:tensorflow:global_step/sec: 2.04042
INFO:tensorflow:examples/sec: 65.2934
INFO:tensorflow:global_step/sec: 2.04012
INFO:tensorflow:examples/sec: 65.2838
INFO:tensorflow:global_step/sec: 2.04147
INFO:tensorflow:examples/sec: 65.3269
INFO:tensorflow:global_step/sec: 2.0419
INFO:tensorflow:examples/sec: 65.3407
INFO:tensorflow:global_step/sec: 2.04186
INFO:tensorflow:examples/sec: 65.3396
INFO:tensorflow:global_step/sec: 2.04138
INFO:tensorflow:examples/sec: 65.3242
INFO:tensorflow:Saving checkpoints for 2343 into tmp/model.ckpt.
INFO:tensorflow:Loss for final step: 0.32346368.
###Markdown
Model testing - When do_predict is run through the run_classifier.py example script, predict.tf_record does not contain the correct answers (every example simply defaults to one class). To keep the two cases distinct (or at least to raise an error if they get mixed up), the file here is deliberately named test.tf_record rather than the default predict.tf_record.
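Each line that the loop below writes to test_results.tsv is a tab-separated list of class probabilities in the same order as label_list (pos has id 0, as the feature log above shows), so a row can be mapped back to a label name along these lines (the probabilities here are made up for illustration):
```python
import numpy as np

label_list = ['pos', 'neg']             # same order used when the features were built (pos -> id 0)
row = np.array([0.93, 0.07])            # example probabilities for one line of test_results.tsv
print(label_list[int(np.argmax(row))])  # -> 'pos'
```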
###Code
'''settings'''
bert_config_file = 'bert_model/bert_config.json'
predict_file = 'tmp/test.tf_record'
output_dir = 'tmp/'
init_checkpoint = 'tmp/model.ckpt-2343'
label_list = ['pos','neg']
train_batch_size = 32
learning_rate = 5e-5
iterations_per_loop = 1000
save_checkpoints_steps = None
eval_batch_size = 8
predict_batch_size = 8
max_seq_length = 128
num_train_steps = 1
num_warmup_steps = 1
tpu_cluster_resolver = None
master = None
num_tpu_cores = None
use_tpu = False
''''''
# The following mainly lifts the relevant code sections out of run_classifier.py
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
model_fn = model_fn_builder(
bert_config=bert_config,
num_labels=len(label_list),
init_checkpoint=init_checkpoint,
learning_rate=learning_rate,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=use_tpu,
use_one_hot_embeddings=use_tpu)
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=master,
model_dir=output_dir,
save_checkpoints_steps=save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=iterations_per_loop,
num_shards=num_tpu_cores,
per_host_input_for_training=is_per_host))
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=train_batch_size,
eval_batch_size=eval_batch_size,
predict_batch_size=predict_batch_size)
predict_drop_remainder = False
predict_input_fn = file_based_input_fn_builder(
input_file=predict_file,
seq_length=max_seq_length,
is_training=False,
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
# write out prediction
output_predict_file = os.path.join(output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
tf.logging.info("***** Predict results *****")
for prediction in result:
output_line = "\t".join(
str(class_probability) for class_probability in prediction) + "\n"
writer.write(output_line)
###Output
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f26d8bd7c80>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': 'tmp/', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': None, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f266c566ef0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=None, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:***** Predict results *****
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Running infer on CPU
INFO:tensorflow:*** Features ***
INFO:tensorflow: name = input_ids, shape = (?, 128)
INFO:tensorflow: name = input_mask, shape = (?, 128)
INFO:tensorflow: name = label_ids, shape = (?,)
INFO:tensorflow: name = segment_ids, shape = (?, 128)
INFO:tensorflow:**** Trainable Variables ****
INFO:tensorflow: name = bert/embeddings/word_embeddings:0, shape = (119547, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/token_type_embeddings:0, shape = (2, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/position_embeddings:0, shape = (512, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/embeddings/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_0/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_1/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_2/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_3/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_4/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_5/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_6/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_7/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_8/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_9/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_10/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/query/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/query/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/key/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/key/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/value/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/self/value/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/attention/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/intermediate/dense/kernel:0, shape = (768, 3072), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/intermediate/dense/bias:0, shape = (3072,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/dense/kernel:0, shape = (3072, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/LayerNorm/beta:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/encoder/layer_11/output/LayerNorm/gamma:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/pooler/dense/kernel:0, shape = (768, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = bert/pooler/dense/bias:0, shape = (768,), *INIT_FROM_CKPT*
INFO:tensorflow: name = output_weights:0, shape = (2, 768), *INIT_FROM_CKPT*
INFO:tensorflow: name = output_bias:0, shape = (2,), *INIT_FROM_CKPT*
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from tmp/model.ckpt-2343
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
###Markdown
Let's see the result
###Code
#load test_result file
import pandas as pd
import numpy as np
dat = pd.read_csv('tmp/test_results.tsv', sep='\t', header = None)
prediction = np.argmax(dat.as_matrix(),axis=1)
dat.tail()
###Output
_____no_output_____
###Markdown
Get answers from the test.tf_record file - To show how a tf_record file should be read, we read the correct label_ids directly from test.tf_record (in fact, the correct answers could also have been carried over directly from the data preprocessing step).
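The cell below uses the older queue-runner input pipeline (tf.train.string_input_producer and TFRecordReader), which is deprecated in later TF 1.x releases. A roughly equivalent sketch using the tf.data API with the same feature spec would be:
```python
# Sketch: read label_ids from test.tf_record with tf.data instead of queue runners.
import tensorflow as tf

seq_length = 128
name_to_features = {
    "input_ids": tf.FixedLenFeature([seq_length], tf.int64),
    "input_mask": tf.FixedLenFeature([seq_length], tf.int64),
    "segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
    "label_ids": tf.FixedLenFeature([], tf.int64),
}

dataset = tf.data.TFRecordDataset('tmp/test.tf_record')
dataset = dataset.map(lambda record: tf.parse_single_example(record, name_to_features))
next_example = dataset.make_one_shot_iterator().get_next()

out_label = []
with tf.Session() as sess:
    try:
        while True:
            out_label.append(sess.run(next_example['label_ids']))
    except tf.errors.OutOfRangeError:
        print('Done!')
```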
###Code
import tensorflow as tf
seq_length = 128
out_label = []
# TFRecord file
filename = 'tmp/test.tf_record'
# Create a filename queue
filename_queue = tf.train.string_input_producer([filename],
shuffle=False,
num_epochs=1)
name_to_features = {
"input_ids": tf.FixedLenFeature([seq_length], tf.int64),
"input_mask": tf.FixedLenFeature([seq_length], tf.int64),
"segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
"label_ids": tf.FixedLenFeature([], tf.int64),
}
# Data reader
reader = tf.TFRecordReader()
key, serialized_example = reader.read(filename_queue)
# Parse the serialized examples
data_features = tf.parse_single_example(
serialized_example,
features=name_to_features)
with tf.Session() as sess:
# Initialization is a required step
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
# Create a thread coordinator
coord = tf.train.Coordinator()
# Start the filename queue and begin reading files
threads = tf.train.start_queue_runners(coord=coord)
try:
while not coord.should_stop():
[d] = sess.run([data_features])
out_label.append(d['label_ids'])
except tf.errors.OutOfRangeError:
print('Done!')
finally:
# Remember to shut down the filename queue at the end
coord.request_stop()
coord.join(threads)
###Output
Done!
###Markdown
Confusion matrix and accuracy
###Code
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true = out_label, y_pred = prediction)
print(cm)
print('accuracy : %.4f'%(np.sum(np.diag(cm))/np.sum(cm)))
###Output
[[10728 1772]
[ 1830 10670]]
accuracy : 0.8559
|
Data Modeling with Apache Cassandra/Project_1B.ipynb | ###Markdown
Part I. ETL Pipeline for Pre-Processing the Files

Import Python packages
###Code
# Import Python packages
import pandas as pd
import cassandra
import re
import os
import glob
import numpy as np
import json
import csv
###Output
_____no_output_____
###Markdown
Creating list of filepaths to process original event csv data files
###Code
# checking your current working directory
print(os.getcwd())
# Get your current folder and subfolder event data
filepath = os.getcwd() + '/event_data'
# Create a for loop to create a list of files and collect each filepath
for root, dirs, files in os.walk(filepath):
# join the file path and roots with the subdirectories using glob
file_path_list = glob.glob(os.path.join(root,'*'))
#print(file_path_list)
###Output
/home/workspace
###Markdown
Processing the files to create the data file csv that will be used for Apache Cassandra tables
###Code
# initiating an empty list of rows that will be generated from each file
full_data_rows_list = []
# for every filepath in the file path list
for f in file_path_list:
# reading csv file
with open(f, 'r', encoding = 'utf8', newline='') as csvfile:
# creating a csv reader object
csvreader = csv.reader(csvfile)
next(csvreader)
# extracting each data row one by one and append it
for line in csvreader:
#print(line)
full_data_rows_list.append(line)
# uncomment the code below if you would like to get total number of rows
#print(len(full_data_rows_list))
# uncomment the code below if you would like to check to see what the list of event data rows will look like
#print(full_data_rows_list)
# creating a smaller event data csv file called event_datafile_full csv that will be used to insert data into the \
# Apache Cassandra tables
csv.register_dialect('myDialect', quoting=csv.QUOTE_ALL, skipinitialspace=True)
with open('event_datafile_new.csv', 'w', encoding = 'utf8', newline='') as f:
writer = csv.writer(f, dialect='myDialect')
writer.writerow(['artist','firstName','gender','itemInSession','lastName','length',\
'level','location','sessionId','song','userId'])
for row in full_data_rows_list:
if (row[0] == ''):
continue
writer.writerow((row[0], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[12], row[13], row[16]))
# check the number of rows in your csv file
with open('event_datafile_new.csv', 'r', encoding = 'utf8') as f:
print(sum(1 for line in f))
###Output
6821
###Markdown
Part II. Complete the Apache Cassandra coding portion of your project.

Now you are ready to work with the CSV file titled event_datafile_new.csv, located within the Workspace directory. The event_datafile_new.csv contains the following columns:
- artist
- firstName of user
- gender of user
- item number in session
- last name of user
- length of the song
- level (paid or free song)
- location of the user
- sessionId
- song title
- userId

The image below is a screenshot of what the denormalized data should appear like in the **event_datafile_new.csv** after the code above is run.

Begin writing your Apache Cassandra code in the cells below.

Creating a Cluster
###Code
# This should make a connection to a Cassandra instance your local machine
# (127.0.0.1)
from cassandra.cluster import Cluster
cluster = Cluster()
# To establish connection and begin executing queries, need a session
session = cluster.connect()
###Output
_____no_output_____
###Markdown
Create Keyspace
###Code
# Create a Keyspace
session.execute("""CREATE KEYSPACE IF NOT EXISTS sparkify
WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1}""")
###Output
_____no_output_____
###Markdown
Set Keyspace
###Code
# Set KEYSPACE to the keyspace specified above
session.set_keyspace('sparkify')
###Output
_____no_output_____
###Markdown
Now we need to create tables to run the following queries. Remember, with Apache Cassandra you model the database tables on the queries you want to run. Create tables to ask the following three questions (queries) on the data:
1. Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4
2. Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182
3. Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own'
###Code
## Query 1: Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4
session.execute("""CREATE TABLE IF NOT EXISTS music_app_history (sessionId int, itemInSession int, artist text, song text, length float, PRIMARY KEY (sessionId, itemInSession));""")
## Query 2: Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182
session.execute("""CREATE TABLE IF NOT EXISTS user_app_history (userId int, sessionId int, itemInSession int, artist text, song text, firstName text, lastName text, PRIMARY KEY (userId, sessionId, itemInSession));""")
## Query 3: Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own'
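## The PRIMARY KEY below makes song the partition key and userId a clustering column, so each
## user who listened to the song keeps a separate row; with song alone as the key, later
## inserts for the same song would overwrite earlier ones.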
session.execute("""CREATE TABLE IF NOT EXISTS songplays (song text, firstName text, lastName text, userId int, PRIMARY KEY (song, userId));""")
file = 'event_datafile_new.csv'
with open(file, encoding = 'utf8') as f:
csvreader = csv.reader(f)
next(csvreader) # skip header
for line in csvreader:
query1 = "INSERT INTO music_app_history (sessionId, itemInSession, artist, song, length) VALUES (%s, %s, %s, %s, %s)"
query2 = "INSERT INTO user_app_history (userId, sessionId, itemInSession, artist, song, firstName, lastName) VALUES (%s, %s, %s, %s, %s, %s, %s)"
query3 = "INSERT INTO songplays (song, firstName, lastName, userId) VALUES (%s, %s, %s, %s)"
## For e.g., to INSERT artist_name and user first_name, you would change the code below to `line[0], line[1]`
artist, firstName, gender, itemInSession, lastName, length, level, location, sessionId, song, userId = line
session.execute(query1, (int(sessionId), int(itemInSession), artist, song, float(length)))
session.execute(query2, (int(userId), int(sessionId), int(itemInSession), artist, song, firstName, lastName))
session.execute(query3, (song, firstName, lastName, int(userId)))
###Output
_____no_output_____
###Markdown
Do a SELECT to verify that the data have been inserted into the table
###Code
## Perform a SELECT statement to verify the data was entered into the table
rows = session.execute("""SELECT artist, song, length from music_app_history WHERE sessionId = 338 AND itemInSession = 4""")
for (artist, song, length) in rows:
print(artist, song, length)
rows = session.execute("""SELECT artist, song, firstName, lastName from user_app_history WHERE userId = 10 AND sessionId = 182""")
for (artist, song, firstname, lastname) in rows:
print(artist, song, firstname, lastname)
rows = session.execute("""SELECT firstName, lastName FROM songplays WHERE song = 'All Hands Against His Own'""")
for row in rows:
print(row.firstname, row.lastname)
###Output
Jacqueline Lynch
Tegan Levine
Sara Johnson
###Markdown
Drop the tables before closing out the sessions
###Code
## Drop the table before closing out the sessions
session.execute("DROP TABLE music_app_history")
session.execute("DROP TABLE user_app_history")
session.execute("DROP TABLE songplays")
###Output
_____no_output_____
###Markdown
Close the session and cluster connection
###Code
session.shutdown()
cluster.shutdown()
###Output
_____no_output_____ |
ALPR/5_ALPR_using_WPOD_Net_LP_Tesseract_OCR.ipynb | ###Markdown
ALPR using WPOD-Net as LP Detector and Tesseract as OCR Introduction In this notebook, we will implement an Automatic License Plate Recognition (ALPR) system composed of1. Vehicle Detection using the **YOLOv2** network trained on the PASCAL-VOC dataset1. License Plate (LP) Detection using the **Warped Planar Object Detection Network** (WPOD-Net) proposed in **License Plate Detection and Recognition in Unconstrained Scenarios** by S. M. Silva and C. R. Jung [[paper](http://openaccess.thecvf.com/content_ECCV_2018/papers/Sergio_Silva_License_Plate_Detection_ECCV_2018_paper.pdf)] [[webpage](http://www.inf.ufrgs.br/~smsilva/alpr-unconstrained/)][[github](https://github.com/sergiomsilva/alpr-unconstrained)]1. Optical Character Recognition (OCR) using **Tesseract**The ALPR implementation process involves:1. Vehicle Detection``` 1.1 Download weights and config file of YOLOv2 network trained on PASCAL-VOC dataset 1.2 Utility functions 1.3 Detect vehicles```2. License Plate Detection``` 2.1 Download weights of pretrained WPOD-Net 2.2 Utility functions 2.3 Detect license plates```3. Optical Character Recognition``` 3.1 Install tesseract 3.2 Recognize characters```4. Inference``` 4.1 Download the test image 4.2 Utility functions 4.3 Infer on the test image 4.4 Display inference 4.5 Observations``` 1. Vehicle Detection Vehicle detection is the first step in the ALPR system. Vehicles are among the object classes present in many classical detection and recognition datasets, such as PASCAL-VOC, ImageNet, and COCO.We will use a pretrained YOLOv2 network trained on the PASCAL-VOC dataset to perform vehicle detection. We will use the weights and config file of YOLOv2 from [here](http://www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/vehicle-detector/), which are the same as used by S. M. Silva, an author of **License Plate Detection and Recognition in Unconstrained Scenarios**. The model was trained for 20 different object classes. The full list of class names can be found [here](http://www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/vehicle-detector/voc.names).We will not make any change or refinement to YOLOv2; we will simply use the network as a black box, merging the outputs related to vehicles (i.e. cars and buses) and ignoring the other classes. 1.1 Download weights and config file of YOLOv2 network trained on PASCAL-VOC dataset
###Code
!wget -c -N www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/vehicle-detector/yolo-voc.cfg -P vehicle-detector/
!wget -c -N www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/vehicle-detector/voc.data -P vehicle-detector/
!wget -c -N www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/vehicle-detector/yolo-voc.weights -P vehicle-detector/
!wget -c -N www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/vehicle-detector/voc.names -P vehicle-detector/
###Output
_____no_output_____
###Markdown
1.2 Utility functions Here, we define a few utility functions:- `getOutputsNames`: to get the names of the output layers of a given neural network- `postprocess`: to get rid of detected bounding boxes with low confidence- `drawPred`: to draw a predicted bounding box- `crop_region`: to crop out a specified region from a given input imageWe also define `Label`, a bounding box class. All detected bounding boxes are stored as objects of this class.
###Code
#@title
# Get the names of the output layers
def getOutputsNames(net):
""" Get the names of the output layers.
Generally in a sequential CNN network there will be
only one output layer at the end. In the YOLOv3
architecture, there are multiple output layers giving
out predictions. This function gives the names of the
output layers. An output layer is not connected to
any next layer.
Args
net : neural network
"""
# Get the names of all the layers in the network
layersNames = net.getLayerNames()
# Get the names of the output layers, i.e. the layers with unconnected outputs
return [layersNames[i[0] - 1] for i in net.getUnconnectedOutLayers()]
import cv2 as cv
# Remove the bounding boxes with low confidence using non-maxima suppression
def postprocess(frame, outs, confThreshold, nmsThreshold=0.4):
frameHeight = frame.shape[0]
frameWidth = frame.shape[1]
classIds = []
confidences = []
boxes = []
# Scan through all the bounding boxes output from the network and keep only the
# ones with high confidence scores. Assign the box's class label as the class with the highest score.
classIds = []
confidences = []
boxes = []
predictions = []
for out in outs:
for detection in out:
scores = detection[5:]
classId = np.argmax(scores)
confidence = scores[classId]
if confidence > confThreshold:
center_x = int(detection[0] * frameWidth)
center_y = int(detection[1] * frameHeight)
width = int(detection[2] * frameWidth)
height = int(detection[3] * frameHeight)
left = int(center_x - width / 2)
top = int(center_y - height / 2)
classIds.append(classId)
confidences.append(float(confidence))
boxes.append([left, top, width, height])
# Perform non maximum suppression to eliminate redundant overlapping boxes with
# lower confidences.
if nmsThreshold:
indices = cv.dnn.NMSBoxes(boxes, confidences, confThreshold, nmsThreshold)
else:
indices = [[x] for x in range(len(boxes))]
for i in indices:
i = i[0]
box = boxes[i]
left = box[0]
top = box[1]
width = box[2]
height = box[3]
predictions.append([classIds[i], confidences[i], [left, top, left + width, top + height]])
return predictions
import cv2 as cv
# Draw the predicted bounding box
def drawPred(frame, pred):
classId = pred[0]
conf = pred[1]
box = pred[2]
left, top, right, bottom = box[0], box[1], box[2], box[3]
# draw bounding box
cv.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 3)
import numpy as np
class Label:
def __init__(self,cl=-1,tl=np.array([0.,0.]),br=np.array([0.,0.]),prob=None):
self.__tl = tl
self.__br = br
self.__cl = cl
self.__prob = prob
def __str__(self):
return 'Class: %d, top_left(x:%f,y:%f), bottom_right(x:%f,y:%f)' % (self.__cl, self.__tl[0], self.__tl[1], self.__br[0], self.__br[1])
def copy(self):
return Label(self.__cl,self.__tl,self.__br)
def wh(self): return self.__br-self.__tl
def cc(self): return self.__tl + self.wh()/2
def tl(self): return self.__tl
def br(self): return self.__br
def tr(self): return np.array([self.__br[0],self.__tl[1]])
def bl(self): return np.array([self.__tl[0],self.__br[1]])
def cl(self): return self.__cl
def area(self): return np.prod(self.wh())
def prob(self): return self.__prob
def set_class(self,cl):
self.__cl = cl
def set_tl(self,tl):
self.__tl = tl
def set_br(self,br):
self.__br = br
def set_wh(self,wh):
cc = self.cc()
self.__tl = cc - .5*wh
self.__br = cc + .5*wh
def set_prob(self,prob):
self.__prob = prob
def crop_region(I,label,bg=0.5):
wh = np.array(I.shape[1::-1])
ch = I.shape[2] if len(I.shape) == 3 else 1
tl = np.floor(label.tl()*wh).astype(int)
br = np.ceil (label.br()*wh).astype(int)
outwh = br-tl
if np.prod(outwh) == 0.:
return None
outsize = (outwh[1],outwh[0],ch) if ch > 1 else (outwh[1],outwh[0])
if (np.array(outsize) < 0).any():
pause()
Iout = np.zeros(outsize,dtype=I.dtype) + bg
offset = np.minimum(tl,0)*(-1)
tl = np.maximum(tl,0)
br = np.minimum(br,wh)
wh = br - tl
Iout[offset[1]:(offset[1] + wh[1]),offset[0]:(offset[0] + wh[0])] = I[tl[1]:br[1],tl[0]:br[0]]
return Iout
###Output
_____no_output_____
###Markdown
1.3 Detect vehicles Let's define the `vehicle_detection` function, which takes an image as input and returns `Icars`, a list of cropped images of vehicles, as well as `Lcars`, a list of bounding boxes around the vehicles. We use the `postprocess` utility function to get rid of detected bounding boxes with low confidence. The `postprocess` utility function internally uses `cv.dnn.NMSBoxes`, which performs non-maximum suppression to eliminate redundant overlapping boxes with lower confidences. We keep only those bounding boxes whose corresponding `classId` is either `car` (class number 6) or `bus` (class number 7), since these two `classId`s are related to vehicles. We will use the `vehicle_detection` function as the first step in our ALPR system implementation.
###Code
# Import necessary modules
import cv2 as cv
import numpy as np
# Initialize the parameters
vehicle_threshold = .5
vehicle_weights = 'vehicle-detector/yolo-voc.weights'
vehicle_netcfg = 'vehicle-detector/yolo-voc.cfg'
# Load the model
vehicle_net = cv.dnn.readNetFromDarknet(vehicle_netcfg, vehicle_weights)
vehicle_net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
vehicle_net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)
def vehicle_detection(frame):
# Create a 4D blob from a frame.
blob = cv.dnn.blobFromImage(frame, 1/255, (416, 416), [0,0,0], 1, crop=False)
# Sets the input to the network
vehicle_net.setInput(blob)
# Runs the forward pass to get output of the output layers
outs = vehicle_net.forward(getOutputsNames(vehicle_net))
# Remove the bounding boxes with low confidence
R = postprocess(frame, outs, vehicle_threshold)
Icars = []
Lcars = []
if len(R):
WH = np.array(frame.shape[1::-1], dtype=float)
for i, r in enumerate(R):
# if classId in ['car', 'bus'] and confidence > vehicle_threshold
if r[0] in [6, 7] and r[1] > vehicle_threshold:
box = r[2]
x1,y1,x2,y2 = (np.array(r[2])/np.concatenate((WH,WH))).tolist()
tl = np.array([x1, y1])
br = np.array([x2, y2])
label = Label(0,tl,br)
Lcars.append(label)
Icar = crop_region(frame,label)
Icars.append(Icar.astype(np.uint8))
return Icars, Lcars
###Output
_____no_output_____
###Markdown
2. License Plate Detection License plates are intrinsically rectangular and planar objects, which are attached to vehicles for identification purposes. To take advantage of their shape, the author proposed a novel CNN called the **Warped Planar Object Detection Network**.This network learns to detect LPs under a variety of different distortions, and regresses the coefficients of an affine transformation that unwarps the distorted LP into a rectangular shape resembling a frontal view.![Fully convolutional detection of planar objects](https://www.dropbox.com/s/a9u69cgtsguemkg/WPOD.png?dl=1) WPOD-Net Architecture The proposed architecture has a total of 21 convolutional layers, of which 14 are inside residual blocks. The size of all convolutional filters is fixed at 3 × 3. ReLU activations are used throughout the entire network, except in the detection block. There are 4 max pooling layers of size 2×2 and stride 2 that reduce the input dimensionality by a factor of 16. Finally, the detection block has two parallel convolutional layers: (i) one for inferring the probability, activated by a softmax function, and (ii) another for regressing the affine parameters, without activation.![WPOD-NET Architecture](https://www.dropbox.com/s/vjeiwilm6ntd8xm/WPOD_Net.png?dl=1) We will use the latest version of the weights of the pretrained WPOD-Net from [here](http://www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/lp-detector), which is the same as used by the author. 2.1 Download weights of pretrained WPOD-Net
###Code
!wget -c -N www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/lp-detector/wpod-net_update1.h5 -P lp-detector/
!wget -c -N www.inf.ufrgs.br/~smsilva/alpr-unconstrained/data/lp-detector/wpod-net_update1.json -P lp-detector/
###Output
_____no_output_____
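###Markdown
To make the detection block described above a little more concrete, here is a minimal, illustrative Keras sketch of such a two-branch head. It is **not** the pretrained WPOD-Net we just downloaded (the layer count and filter sizes below are placeholder assumptions); it only mirrors the output layout that the `reconstruct` utility defined later consumes: a softmax probability branch and a parallel affine-regression branch with no activation, concatenated into 8 output channels per cell.
###Code
# Illustrative sketch only -- placeholder layers, NOT the pretrained WPOD-Net weights
from keras.layers import Input, Conv2D, MaxPooling2D, Concatenate
from keras.models import Model
inp = Input(shape=(None, None, 3))
x = Conv2D(16, (3, 3), padding='same', activation='relu')(inp)
x = MaxPooling2D((2, 2))(x)  # the real network stacks 4 such poolings (overall stride 16)
probs = Conv2D(2, (3, 3), padding='same', activation='softmax')(x)  # object / non-object probability
affine = Conv2D(6, (3, 3), padding='same')(x)                       # affine coefficients, no activation
toy_wpod = Model(inp, Concatenate()([probs, affine]))
toy_wpod.summary()
###Output
_____no_output_____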
###Markdown
2.2 Utility functions There are a few utility functions that have been taken from the author's [github](https://github.com/sergiomsilva/alpr-unconstrained). These utility functions are:- `nms`: to perform non-maximum suppression- `IOU`: to calculate the intersection over union of two bounding boxes given as pairs of top-left and bottom-right corners- `IOU_labels`: to calculate the intersection over union of two bounding boxes given as objects of the `Label` class- `find_T_matrix`: to calculate the affine transformation for a given set of source points and destination points- `reconstruct`: to perform license plate rectification using the calculated affine transformation- `detect_lp`: to perform license plate detection using WPOD-Net
###Code
#@title
import cv2 as cv
import numpy as np
import time
class DLabel (Label):
def __init__(self,cl,pts,prob):
self.pts = pts
tl = np.amin(pts,1)
br = np.amax(pts,1)
Label.__init__(self,cl,tl,br,prob)
def nms(Labels,iou_threshold=.5):
SelectedLabels = []
Labels.sort(key=lambda l: l.prob(),reverse=True)
for label in Labels:
non_overlap = True
for sel_label in SelectedLabels:
if IOU_labels(label,sel_label) > iou_threshold:
non_overlap = False
break
if non_overlap:
SelectedLabels.append(label)
return SelectedLabels
def IOU(tl1,br1,tl2,br2):
wh1,wh2 = br1-tl1,br2-tl2
assert((wh1>=.0).all() and (wh2>=.0).all())
intersection_wh = np.maximum(np.minimum(br1,br2) - np.maximum(tl1,tl2),0.)
intersection_area = np.prod(intersection_wh)
area1,area2 = (np.prod(wh1),np.prod(wh2))
union_area = area1 + area2 - intersection_area;
return intersection_area/union_area
def IOU_labels(l1,l2):
return IOU(l1.tl(),l1.br(),l2.tl(),l2.br())
def getRectPts(tlx,tly,brx,bry):
return np.matrix([[tlx,brx,brx,tlx],[tly,tly,bry,bry],[1.,1.,1.,1.]],dtype=float)
def find_T_matrix(pts,t_pts):
A = np.zeros((8,9))
for i in range(0,4):
xi = pts[:,i];
xil = t_pts[:,i];
xi = xi.T
A[i*2, 3:6] = -xil[2]*xi
A[i*2, 6: ] = xil[1]*xi
A[i*2+1, :3] = xil[2]*xi
A[i*2+1, 6: ] = -xil[0]*xi
[U,S,V] = np.linalg.svd(A)
H = V[-1,:].reshape((3,3))
return H
def reconstruct(Iorig,I,Y,out_size,threshold=.9):
net_stride = 2**4
side = ((208. + 40.)/2.)/net_stride # 7.75
Probs = Y[...,0]
# print Probs
Affines = Y[...,2:]
rx,ry = Y.shape[:2]
# print Y.shape
ywh = Y.shape[1::-1]
# print ywh
iwh = np.array(I.shape[1::-1],dtype=float).reshape((2,1))
# print iwh
xx,yy = np.where(Probs>threshold)
# print xx,yy
WH = getWH(I.shape)
MN = WH/net_stride
# print MN
vxx = vyy = 0.5 #alpha
base = lambda vx,vy: np.matrix([[-vx,-vy,1.],[vx,-vy,1.],[vx,vy,1.],[-vx,vy,1.]]).T
labels = []
for i in range(len(xx)):
y,x = xx[i],yy[i]
affine = Affines[y,x]
prob = Probs[y,x]
mn = np.array([float(x) + .5,float(y) + .5])
A = np.reshape(affine,(2,3))
A[0,0] = max(A[0,0],0.)
A[1,1] = max(A[1,1],0.)
# print A
pts = np.array(A*base(vxx,vyy)) #*alpha
# print pts
pts_MN_center_mn = pts*side
pts_MN = pts_MN_center_mn + mn.reshape((2,1))
pts_prop = pts_MN/MN.reshape((2,1))
labels.append(DLabel(0,pts_prop,prob))
# print(labels)
final_labels = nms(labels,.1)
TLps = []
if len(final_labels):
final_labels.sort(key=lambda x: x.prob(), reverse=True)
for i,label in enumerate(final_labels):
t_ptsh = getRectPts(0,0,out_size[0],out_size[1])
ptsh = np.concatenate((label.pts*getWH(Iorig.shape).reshape((2,1)),np.ones((1,4))))
H = find_T_matrix(ptsh,t_ptsh)
Ilp = cv.warpPerspective(Iorig,H,out_size,borderValue=.0)
# cv.imshow("frame", Iorig)
# cv.waitKey(0)
TLps.append(Ilp)
return final_labels,TLps
def im2single(I):
assert(I.dtype == 'uint8')
return I.astype('float32')/255.
def getWH(shape):
return np.array(shape[1::-1]).astype(float)
def detect_lp(model,I,max_dim,net_step,out_size,threshold):
min_dim_img = min(I.shape[:2])
factor = float(max_dim)/min_dim_img
# print I.shape[:2]
w,h = (np.array(I.shape[1::-1],dtype=float)*factor).astype(int).tolist()
w += (w%net_step!=0)*(net_step - w%net_step)
h += (h%net_step!=0)*(net_step - h%net_step)
# print w
# print h
Iresized = cv.resize(I,(w,h))
T = Iresized.copy()
T = T.reshape((1,T.shape[0],T.shape[1],T.shape[2]))
start = time.time()
Yr = model.predict(T)
Yr = np.squeeze(Yr)
elapsed = time.time() - start
# print(Yr)
L,TLps = reconstruct(I,Iresized,Yr,out_size,threshold)
return L,TLps,elapsed
###Output
_____no_output_____
###Markdown
2.3 Detect license plates Let's define an `lp_detection` function, which takes a vehicle image as input and returns `Llps`, a list of bounding boxes around the detected license plates, and `Ilps`, a list of cropped images of the detected license plates. Detections with probability below `lp_threshold` are discarded inside the `detect_lp` utility function.
###Code
# Import necessary modules
from keras.models import model_from_json
# Initialize the parameters
lp_threshold = .6
wpod_lp_weights_path = 'lp-detector/wpod-net_update1.h5'
wpod_lp_json_path = 'lp-detector/wpod-net_update1.json'
# Load the model
with open(wpod_lp_json_path,'r') as json_file:
wpod_json = json_file.read()
lp_net = model_from_json(wpod_json)
lp_net.load_weights(wpod_lp_weights_path)
def lp_detection(vehicle_img):
ratio = float(max(vehicle_img.shape[:2]))/min(vehicle_img.shape[:2])
side = int(ratio * 288.)
bound_dim = min(side + (side % (2**4) ), 608)
Llps, LlpImgs, elapsed = detect_lp(lp_net,im2single(vehicle_img),bound_dim,2**4,(240,80),lp_threshold)
Ilps = []
for LlpImg in LlpImgs:
Ilp = LlpImg * 255.
Ilps.append(Ilp.astype(np.uint8))
return Llps, Ilps, elapsed
###Output
Using TensorFlow backend.
###Markdown
3. Optical Character Recognition OCR is the third and last step in the ALPR system. In this step, for each detected license plate we apply OCR for a) character segmentation and b) character recognition.We will use the Tesseract OCR engine for recognizing and converting the text of the license plate into a machine-encoded string. 3.1 Install tesseract [Tesseract](https://github.com/tesseract-ocr/tesseract) is an optical character recognition (OCR) engine. That is, it will recognize and “read” the text embedded in images.
###Code
!sudo apt-get install tesseract-ocr
!pip install pytesseract
###Output
_____no_output_____
###Markdown
3.2 Recognize characters Let's define an `lp_ocr` function, which takes an image of a license plate as input, applies the Tesseract OCR engine, converts the license plate text into a string and returns that string as output.
###Code
# Import necessary modules
import cv2 as cv
import pytesseract
def lp_ocr(lp_img):
gray_lp_img = cv.cvtColor(lp_img, cv.COLOR_BGR2GRAY)
lp_str = pytesseract.image_to_string(gray_lp_img, config=("-l eng --oem 1 --psm 13"))
return lp_str
###Output
_____no_output_____
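###Markdown
As a possible refinement (not part of the original pipeline), Tesseract can be restricted to plate-like characters through a character whitelist. The sketch below is an assumption-laden variant: depending on the Tesseract version, the LSTM engine selected by `--oem 1` may ignore the whitelist, in which case the legacy engine (`--oem 0`) would be needed.
###Code
# Hypothetical variant of lp_ocr with a character whitelist (illustrative only)
def lp_ocr_whitelist(lp_img):
    gray = cv.cvtColor(lp_img, cv.COLOR_BGR2GRAY)
    cfg = "-l eng --oem 1 --psm 13 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(gray, config=cfg)
###Output
_____no_output_____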
###Markdown
4. Inference We have already defined 1) `vehicle_detection`, 2) `lp_detection` and 3) `lp_ocr` to perform vehicle detection, license plate detection and OCR respectively. Let's implement our ALPR system, which composes all three functions into a sequential pipeline. 4.1 Download the test image
###Code
!wget "https://raw.githubusercontent.com/sergiomsilva/alpr-unconstrained/master/samples/test/03066.jpg" -O test_img.jpg
###Output
_____no_output_____
###Markdown
4.2 Utility functions Here, we define a few utility functions:- `draw_label`: to draw a bounding box using an object of the `Label` class as input- `draw_losangle`: to draw a bounding box using a set of four corner points- `write2img`: to write text on an input image around a given bounding box
###Code
#@title
import numpy as np
import cv2 as cv
def draw_label(I,l,color=(255,0,0),thickness=1):
wh = np.array(I.shape[1::-1]).astype(float)
tl = tuple((l.tl()*wh).astype(int).tolist())
br = tuple((l.br()*wh).astype(int).tolist())
cv.rectangle(I,tl,br,color,thickness=thickness)
def draw_losangle(I,pts,color=(1.,1.,1.),thickness=1):
assert(pts.shape[0] == 2 and pts.shape[1] == 4)
for i in range(4):
pt1 = tuple(pts[:,i].astype(int).tolist())
pt2 = tuple(pts[:,(i+1)%4].astype(int).tolist())
cv.line(I,pt1,pt2,color,thickness)
def write2img(Img,label,strg,txt_color=(0,0,0),bg_color=(255,255,255),font_size=1):
wh_img = np.array(Img.shape[1::-1])
font = cv.FONT_HERSHEY_SIMPLEX
wh_text,v = cv.getTextSize(strg, font, font_size, 3)
bl_corner = label.tl()*wh_img
tl_corner = np.array([bl_corner[0],bl_corner[1]-wh_text[1]])/wh_img
br_corner = np.array([bl_corner[0]+wh_text[0],bl_corner[1]])/wh_img
bl_corner /= wh_img
if (tl_corner < 0.).any():
delta = 0. - np.minimum(tl_corner,0.)
elif (br_corner > 1.).any():
delta = 1. - np.maximum(br_corner,1.)
else:
delta = 0.
tl_corner += delta
br_corner += delta
bl_corner += delta
tpl = lambda x: tuple((x*wh_img).astype(int).tolist())
cv.rectangle(Img, tpl(tl_corner), tpl(br_corner), bg_color, -1)
cv.putText(Img,strg,tpl(bl_corner),font,font_size,txt_color,3)
###Output
_____no_output_____
###Markdown
4.3 Infer on the test image To infer on the test image we apply- first, `vehicle_detection` to detect all vehicles in the input test image. The output of this step is `Icars`, a list of cropped vehicle regions, as well as `Lcars`, a list of bounding boxes around the detected vehicles.- second, `lp_detection` on each cropped vehicle region in `Icars` to detect license plates. The output of this step is `Llps`, a list of bounding boxes around the detected license plates, as well as `Ilps`, the cropped images of the license plates in the given vehicle image.- third, `lp_ocr` on each cropped license plate region in `Ilps` to convert the license plate text in it to a string.- finally, `write2img` to write the recognized license plate characters on the input test image.
###Code
# Import necessary modules
import numpy as np
import cv2 as cv
# read test image
test_img = cv.imread('test_img.jpg')
# detect cars
Icars, Lcars = vehicle_detection(test_img)
print('# vehicle detected: {}'.format(len(Icars)))
# for each detected car in test image
for Icar, Lcar in zip(Icars, Lcars):
# draw car bounding box on test image
draw_label(test_img,Lcar,color=(0,255,255),thickness=3)
# detect LP in detected car
Llps, Ilps, elapsed = lp_detection(Icar)
# for each detected LP in detected car image
for Llp, Ilp in zip(Llps, Ilps):
# draw LP bounding box on test image
pts = Llp.pts*Lcar.wh().reshape(2,1) + Lcar.tl().reshape(2,1)
ptspx = pts*np.array(test_img.shape[1::-1],dtype=float).reshape(2,1)
draw_losangle(test_img,ptspx,color=(0,0,255),thickness=3)
# Recognize characters
lp_str = lp_ocr(Ilp)
# write text on test image
llp = Label(0,tl=pts.min(1),br=pts.max(1))
write2img(test_img,llp,lp_str)
###Output
# vehicle detected: 3
###Markdown
4.4 Display inference
###Code
# Import necessary modules
import matplotlib
import matplotlib.pyplot as plt
# Display inference
fig=plt.figure(figsize=(10, 10))
plt.imshow(test_img[:,:,::-1])
plt.show()
###Output
_____no_output_____ |
experiments/subwords_tokenization/cDNA_subwords_vocab2048.ipynb | ###Markdown
If you would like more explanation of the data preparation part, please see the cDNA_subwords_vocab64 notebook. Setup
###Code
!pip show fastai
!pip show biopython
from Bio import SeqIO
from fastai.text.all import *
###Output
_____no_output_____
###Markdown
Data preparation
###Code
!wget -nc http://ftp.ensembl.org/pub/release-103/fasta/homo_sapiens/cdna/Homo_sapiens.GRCh38.cdna.abinitio.fa.gz
!yes n | gunzip Homo_sapiens.GRCh38.cdna.abinitio.fa.gz
###Output
File ‘Homo_sapiens.GRCh38.cdna.abinitio.fa.gz’ already there; not retrieving.
gzip: Homo_sapiens.GRCh38.cdna.abinitio.fa already exists; not overwritten
yes: standard output: Broken pipe
###Markdown
Token preparation
###Code
with open("Homo_sapiens.GRCh38.cdna.abinitio.fa", "rt") as handle:
txts = L(str(record.seq).lower() for record in SeqIO.parse(handle, "fasta"))
txt = txts[0]
txts = txts[1:10001]
SPECIAL_TOKENS = 2
ALPHABET = 4
VOCAB_SIZE = 2048 + SPECIAL_TOKENS + ALPHABET
tokenizer = SubwordTokenizer(vocab_sz=VOCAB_SIZE, special_toks=[], cache_dir='tmp/vocab2048', lang='dna')
tokenizer.setup(txts)
toks = first(tokenizer([txt]))
print(coll_repr(toks, 30))
txt[:100]
tkn = Tokenizer(tokenizer, rules=[], sep='')
print(coll_repr(tkn(txt), 30))
toks_all = txts.map(tkn)
###Output
_____no_output_____
###Markdown
Token analysis
###Code
from operator import add
tokens = reduce(add, toks_all)
###Output
_____no_output_____
###Markdown
Top 10 most common tokens
###Code
import collections
elements_count = collections.Counter(tokens)
print(elements_count.most_common(10))
###Output
[('tga', 16335), ('tag', 9722), ('taa', 9384), ('ctga', 8428), ('atga', 7842), ('ag', 7732), ('ttga', 6836), ('tgtga', 6109), ('cgg', 5413), ('cccag', 5359)]
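###Markdown
As a quick sanity check (a small sketch using the `elements_count` counter above), we can list which of the top tokens end in one of the DNA stop codons (taa, tag, tga):
###Code
# Which of the 8 most common subwords end in a stop codon?
stop_codons = ('taa', 'tag', 'tga')
top8 = [tok for tok, _ in elements_count.most_common(8)]
print([tok for tok in top8 if tok.endswith(stop_codons)])
###Output
_____no_output_____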
###Markdown
The eight most common tokens correspond to stop codons. Distribution of occurrences
###Code
counts = []
for count in elements_count:
counts.append(elements_count[count])
from matplotlib import pyplot
pyplot.figure(figsize=(10,4))
pyplot.hist(counts, bins=range(min(counts), max(counts) + 500, 500))
pyplot.xlabel("occurence count")
pyplot.ylabel("distinct subword count")
pyplot.show()
###Output
_____no_output_____
###Markdown
Distribution of lengths of subwords
###Code
lengths = []
vocab = set(tokens)
for token in vocab:
lengths.append(len(token))
from matplotlib import pyplot
pyplot.figure(figsize=(4,6))
pyplot.hist(lengths, bins=range(min(lengths), max(lengths) + 1, 1))
pyplot.xlabel("length")
pyplot.ylabel("count")
pyplot.show()
###Output
_____no_output_____
###Markdown
Longest words in vocabulary
###Code
vocab = list(vocab)
vocab.sort(key=len)
print(vocab[-1])
print(vocab[-2])
print(vocab[-3])
###Output
gaagaagaagaagaag
aatggaatcgaatgga
aatggaatggaatgga
###Markdown
The longest subwords are repeats, and two of them differ in only one letter. Least frequent subwords in training data
###Code
print(elements_count.most_common()[-1])
print(elements_count.most_common()[-2])
print(elements_count.most_common()[-3])
print(elements_count.most_common()[-4])
###Output
('t', 86)
('c', 116)
('a', 142)
('g', 148)
|
band_pass_filter/Pynq/afsk-demodulator-pynq.ipynb | ###Markdown
AFSK Demodulator Step 2: Band-Pass FIR FilterThis is a Pynq portion of the AFSK demodulator project. We will be using the FPGA overlay that we created in Vivado.At this point we have created the bitstream for "project_02" and copied the bitstream, TCL wrapper, and hardware hand-off file to the Pynq board.Let's first verify that we can load the module.
###Code
from pynq import Overlay, Xlnk
import numpy as np
import pynq.lib.dma
overlay = Overlay('project_02.bit')
dma = overlay.bpfilter.bpf_dma
###Output
_____no_output_____
###Markdown
Accelerating the FIR FilterBelow is the implementation of the AFSK demodulator in Python. We are going to remove the band-pass filter code and replace it with new code that uses the FPGA filter via DMA.
###Code
import sys
sys.path.append('../../base')
import numpy as np
from scipy.signal import lfiltic, lfilter, firwin
from scipy.io.wavfile import read
from DigitalPLL import DigitalPLL
from HDLC import HDLC
from AX25 import AX25
import time
from pynq import Overlay, Xlnk
import numpy as np
block_size = 1024*1024
xlnk = Xlnk()
out_buffer = xlnk.cma_array(shape=(block_size,), dtype=np.int16)
in_buffer = xlnk.cma_array(shape=(block_size,), dtype=np.int16)
def bpf(data):
start_time = time.time()
output = np.array([],dtype=np.int16)
for i in range(0, len(data), block_size):
print(i)
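        # Stream this block through the FPGA: out_buffer -> DMA send channel -> band-pass filter IP -> DMA receive channel -> in_buffer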
out_buffer[:len(data[i:i+block_size])] = data[i:i+block_size]
dma.sendchannel.transfer(out_buffer)
dma.recvchannel.transfer(in_buffer)
dma.sendchannel.wait()
dma.recvchannel.wait()
        output = np.append(output, in_buffer[:len(data[i:i+block_size])])  # keep only the samples that belong to this block
stop_time = time.time()
sw_exec_time = stop_time - start_time
print('Hardware FIR execution time: ',sw_exec_time)
return output
class fir_filter(object):
def __init__(self, coeffs):
self.coeffs = coeffs
self.zl = lfiltic(self.coeffs, 32768.0, [], [])
def __call__(self, data):
result, self.zl = lfilter(self.coeffs, 32768.0, data, -1, self.zl)
return result
class NRZI:
def __init__(self):
self.state = False
def __call__(self, x):
result = (x == self.state)
self.state = x
return result
audio_file = read('../../base/TNC_Test_Ver-1.102-26400-1sec.wav')
sample_rate = audio_file[0]
audio_data = audio_file[1]
print(len(audio_data))
delay = 12 # ~446us
lpf_coeffs = np.array(firwin(101, [1200.0/(sample_rate/2)], width = None,
pass_zero = True, scale = True, window='hann') * 32768, dtype=int)
lpf = fir_filter(lpf_coeffs)
filter_delay = 64 + 50
# Band-pass filter the audio data
f = bpf(np.append(audio_data, np.zeros(filter_delay, dtype=int)))
# Digitize the data
print("Digitizing audio data...")
print(len(f))
d = np.greater_equal(f, 0, dtype=int)
print(len(d))
# Delay the data
print("Delay...")
a = d[delay:]
# XOR the digitized data with the delayed version
print("Doing Logical XOR...")
x = np.logical_xor(d[:0-delay], a, dtype=int)
# Low-pass filter the PWM signal
print("Doing LPF...")
c = lpf(x-0.5)
# Digitize the tone transistions
print(len(c))
print("Digitizing correlator output...")
dx = np.greater_equal(c, 0.0)
print(len(dx))
# Create the PLL
pll = DigitalPLL(sample_rate, 1200.0)
locked = np.zeros(len(dx), dtype=int)
sample = np.zeros(len(dx), dtype=int)
# Clock recovery
print("Doing clock recovery...")
for i in range(len(dx)):
sample[i] = pll(dx[i])
locked[i] = pll.locked()
nrzi = NRZI()
print("Doing NRZI...")
data = [int(nrzi(x)) for x,y in zip(dx, sample) if y]
hdlc = HDLC()
print("Doing HDLC")
count = 0
for b,s,l in zip(dx, sample, locked):
if s:
packet = hdlc(nrzi(b), l)
if packet is not None:
count += 1
print(count, AX25(packet[1]))
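# The lines below poke the AXI DMA control register directly: setting bit 2 (reset) and clearing bit 0 (run/stop)
# halts the receive channel so it can be cleanly restarted before the next run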
ctrl = dma.recvchannel._mmio.read(dma.recvchannel._offset)
print(ctrl)
dma.recvchannel._mmio.write(dma.recvchannel._offset, (ctrl | 4) & 0xFFFFFFFE)
print(dma.recvchannel._mmio.read(dma.recvchannel._offset+0x04))
dma.recvchannel.start()
dma.sendchannel.start()
xlnk.xlnk_reset()
###Output
_____no_output_____ |
examples/ch04/snippets_ipynb/04_08.ipynb | ###Markdown
4.8 Using IPython Tab Completion for Discovery
###Code
import math
# press <Tab> after ma
ma
###Output
_____no_output_____
###Markdown
Viewing Identifiers in a Module
###Code
# press <Tab> after the dot
math.
###Output
_____no_output_____
###Markdown
Using the Currently Highlighted Function
###Code
math.fabs?
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
###Output
_____no_output_____ |
.ipynb_checkpoints/Trabajo Final Econometria-checkpoint.ipynb | ###Markdown
Wage convergence between managers and subordinates in the Argentine labor market: a quantitative approach. Appendix**We import the necessary libraries**
###Code
import pandas as pd
import numpy as np
import statsmodels as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
import seaborn as sb
###Output
_____no_output_____
###Markdown
**We create a pandas DataFrame with the EPH data**
###Code
# to download it automatically from my dropbox
#df = pd.read_excel('https://www.dropbox.com/s/ai7ejpw2b01ql3c/usu_individual_t217.xls?dl=1')
df = pd.read_excel('C:/Users/franc/Dropbox/Econometria/EPH/usu_individual_t217.xls')
###Output
_____no_output_____
###Markdown
We keep only the variables that could be relevant for the study
###Code
df = df.filter(['REGION', 'AGLOMERADO', 'PONDERA', 'ITF', 'IPCF', 'PONDIH',
'CH04', 'CH06', 'CH08', 'CH11', 'CH12', 'CH13', 'CH14',
'NIVEL_ED', 'ESTADO', 'PP3E_TOT', 'PP3F_TOT', 'PP04A',
'PP05B2_MES', 'PP05B2_ANO', 'PP08D1', 'P21', 'PONDIIO',
'Tot_p12', 'p47T', 'PONDII', 'PP04D_COD'])
###Output
_____no_output_____
###Markdown
We need to create a continuous variable that describes the respondents' number of years of education. To do this we will create a variable called EDUC, assigning 6 years for complete primary school, 12 years for complete secondary school, 14 for complete tertiary education, 17 for a complete university degree, etc.**We will assume** that the education scale is determined as follows:|Level|Completed|Years of ed.||-----|--------|---------||Preschool|No|0||Preschool|Yes|0||Primary|No|CH14||Primary|Yes|6||EGB|No|CH14||EGB|Yes|9||Secondary|No|CH14+7||Secondary|Yes|12||Polimodal|No|CH14+9||Polimodal|Yes|12||Tertiary|No|CH14+12[1]||Tertiary|Yes|14||University|No|CH14+12||University|Yes|17||Postgraduate|No|CH14+17||Postgraduate|Yes|20|[1]Those with excessively high tertiary values need to be reviewed.We will discard respondents who received special education and those who answered "don't know / no answer" (NS/NR) in CH13 and CH14
###Code
# Create the EDUC variable
df['EDUC'] = 0
# Drop the observations that do not apply
# Special education
df = df.drop(df[df.CH12==9].index)
# Don't know / no answer on whether the level was completed
df = df.drop(df[df.CH13==9].index)
# Special education
df = df.drop(df[df.CH14==98].index)
# Don't know / no answer on the last level passed
df = df.drop(df[df.CH14==99].index)
# Replace the NaN's in CH14 with 0's
df['CH14'].fillna(0, inplace=True)
# Remove those under 10 years old
df = df.drop(df[df.ESTADO==4].index)
###Output
_____no_output_____
###Markdown
Now we assign the corresponding number of years of education according to the table defined above
###Code
# preschool incomplete and complete
df.loc[(df.CH12==1) & (df.CH13==2), 'EDUC'] = 0
df.loc[(df.CH12==1) & (df.CH13==1), 'EDUC'] = 0
# primary incomplete and complete
df.loc[(df.CH12==2) & (df.CH13==2), 'EDUC'] = df['CH14']
df.loc[(df.CH12==2) & (df.CH13==1), 'EDUC'] = 6
# EGB incomplete and complete
df.loc[(df.CH12==3) & (df.CH13==2), 'EDUC'] = df['CH14']
df.loc[(df.CH12==3) & (df.CH13==1), 'EDUC'] = 9
# secondary incomplete and complete
df.loc[(df.CH12==4) & (df.CH13==2), 'EDUC'] = df['CH14'] + 6
df.loc[(df.CH12==4) & (df.CH13==1), 'EDUC'] = 12
# polimodal incomplete and complete
df.loc[(df.CH12==5) & (df.CH13==2), 'EDUC'] = df['CH14'] + 9
df.loc[(df.CH12==5) & (df.CH13==1), 'EDUC'] = 12
# tertiary incomplete and complete
df.loc[(df.CH12==6) & (df.CH13==2), 'EDUC'] = df['CH14'] + 12
df.loc[(df.CH12==6) & (df.CH13==1), 'EDUC'] = 14
# university incomplete and complete
df.loc[(df.CH12==7) & (df.CH13==2), 'EDUC'] = df['CH14'] + 12
df.loc[(df.CH12==7) & (df.CH13==1), 'EDUC'] = 17
# postgraduate incomplete and complete
df.loc[(df.CH12==8) & (df.CH13==2), 'EDUC'] = df['CH14'] + 17
df.loc[(df.CH12==8) & (df.CH13==1), 'EDUC'] = 20
df
###Output
_____no_output_____
###Markdown
ExperienceWe consider that experience is given by $$EXPER = EDAD - EDUC - 6$$
###Code
df['EDAD'] = df['CH06']
df['EXPER'] = df['EDAD']-df['EDUC']-6
###Output
_____no_output_____
###Markdown
As we can see, there are cases where experience equals **-6**. To avoid problems with the results, we will drop everyone with experience below 0.
###Code
df = df.drop(df[df.EXPER<0].index)
###Output
_____no_output_____
###Markdown
Manager variableWe build the variable based on the Clasificador Nacional de Ocupaciones (CNO, National Classifier of Occupations). According to the CNO, the variable *'PP04_COD'* is composed of:- First 2 digits: occupational character- 3rd digit: **occupational hierarchy**- 4th digit: occupational technology- 5th digit: occupational qualificationWithin the occupational hierarchy we find the following values:- 0: **Management**- 1: Self-employed- 2: **Department heads**- 3: Salaried workersThe observations we are interested in are those with the values 0 and 2. We start by dropping the observations that do not report a hierarchy
###Code
df = df.dropna(subset=['PP04D_COD'])
###Output
_____no_output_____
###Markdown
This reduces our observations to 24,039. Next, we build two new variables:- 'CAT_OCUP', which takes the first 2 digits of the CNO to identify the occupational character- 'JER_OCUP', which takes the third digit in order to identify the hierarchy.We convert the data type to integer so that it can easily be used as a dummy.
###Code
df['CAT_OCUP'] = df['PP04D_COD'].astype(str).str[0:2].astype(float).astype(int)
df['JER_OCUP'] = df['PP04D_COD'].astype(str).str[2:3].astype(float).astype(int)
# Drop those without data (JER_OCUP>3) and the self-employed (JER_OCUP=1)
df = df.drop(df[df.JER_OCUP>3].index)
df = df.drop(df[df.JER_OCUP==1].index)
###Output
_____no_output_____
###Markdown
Dummy variablesWe will generate a dummy variable **'CEO'** equal to 1 for directors (CEOs) and 0 for department heads. We will also generate a variable **'WC'** that takes the value 1 for directors and department heads and 0 for salaried employees
###Code
# categorical variable
df['JER_OCUP'] = df['JER_OCUP'].astype('category')
# Directors (vs non-directors)
df['CEO'] = 0
df.loc[(df.JER_OCUP==0), 'CEO'] = 1
# Department heads and directors (vs blue collars)
df['WC'] = 0
df.loc[(df.JER_OCUP==0), 'WC'] = 1
df.loc[(df.JER_OCUP==2), 'WC'] = 1
# to check that the dummies were created correctly
'''df.filter(['JER_OCUP', 'CEO', 'WC'])'''
###Output
_____no_output_____
###Markdown
RegressionWe will run several regressions, playing with the dummies. To do this we first need to create the variable EXPER2 such that:$$EXPER2 = EXPER^2$$and generate the variable LWAGE such that:$$LWAGE = ln(P21)$$
###Code
df['EXPER2'] = df['EXPER']**2
df['OCUPADO'] = 0
df.loc[(df.ESTADO==1), 'OCUPADO'] = 1
df.loc[(df.ESTADO==2), 'OCUPADO'] = 0
###Output
_____no_output_____
###Markdown
We create a new dataframe containing only those with income, i.e., P21>0. We also drop those with income below 0, since these are input errors. Note: although in the Mincer model this is not done _directly_, the result is the same because the logarithms of 0 will be NaN's and cannot be used in the regression. To avoid error messages in Python, we discard those without income.
###Code
#df1 = df.loc[df['P21'] > 0]
df = df.drop(df[df.P21<=0].index)
df['LWAGE'] = np.log(df['P21'])
###Output
_____no_output_____
###Markdown
Next, we estimate the betas of the control variables for all directors, department heads and salaried employees, excluding the self-employed, in order to establish the _baseline_.
###Code
modelo = smf.ols("df['LWAGE'] ~ df['EDUC']+ df['EXPER'] + df['EXPER2']", data=df
)
modelo_reg = modelo.fit()
modelo_reg.summary()
###Output
_____no_output_____
###Markdown
Interpretation of the baselineAs we can see, if we include all directors, department heads and salaried employees, we obtain that:- __Education__: it is highly significant $(t=49.5)$, and each extra year of schooling increases the wage by 8.1%- __Experience__: it is highly significant $(t=29.66)$, and each extra year of experience increases the wage by 4.4%- __Experience squared__: the value of -0.0006 means that each extra year of experience has a smaller effect than the previous one. At the same time, it also implies that the relationship is almost linear. CEOs vs department heads _(white collars)_To continue, we will drop ordinary salaried employees from the DataFrame and use the previously created "CEO" dummy to capture the effect that the rank of director has on the wage.
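(As an aside before running that regression: simple arithmetic on the baseline figures above implies that the experience profile peaks where $0.044 - 2(0.0006)EXPER = 0$, i.e. at roughly $EXPER \approx 37$ years, which is why the profile looks nearly linear over most of the observed range.)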
###Code
df1 = df.drop(df[df.JER_OCUP==3].index) # this is the same as df1 = df[df.WC==1]
modelo2 = smf.ols(
"df1['LWAGE'] ~ df1['EDUC'] + df1['EXPER'] + df1['EXPER2'] + df1['CEO']",
data=df1
)
modelo2_reg = modelo2.fit()
modelo2_reg.summary()
###Output
_____no_output_____
###Markdown
As we can see, the model estimates that for each extra year of education that CEOs and managers have, their wage grows by 7.76%. Likewise, for each extra year of experience they earn an extra 4.36%. In both cases we have $t>|2|$, which implies that the variables are significant. As with the _baseline_, we find that the concavity of experience is almost null, i.e., the relationship is almost linear. Regarding being a CEO or not, we obtain an interesting result: CEOs earn, on average, 8.73% less than managers.Before drawing any conclusions, we should approach the problem from different angles. To do so, the first thing we will do is shrink the sample of observations and run the regressions for CEOs and for managers separately.**Directors**
###Code
modelo3 = smf.ols(
"df1[df1.CEO==1]['LWAGE'] ~ df1[df1.CEO==1]['EDUC'] + df1[df1.CEO==1]['EXPER'] + df1[df1.CEO==1]['EXPER2']",
data=df1[df1.CEO==1]
)
modelo3_reg = modelo3.fit()
modelo3_reg.summary()
###Output
_____no_output_____
###Markdown
**Managers**
###Code
modelo4 = smf.ols(
"df1[df1.CEO==0]['LWAGE'] ~ df1[df1.CEO==0]['EDUC'] + df1[df1.CEO==0]['EXPER'] + df1[df1.CEO==0]['EXPER2']",
data=df1[df1.CEO==0]
)
modelo4_reg = modelo4.fit()
modelo4_reg.summary()
###Output
_____no_output_____
###Markdown
**Blue collars (salaried employees)**
###Code
modelo1 = smf.ols(
"df[df.JER_OCUP==3]['LWAGE'] ~ df[df.JER_OCUP==3]['EDUC'] + df[df.JER_OCUP==3]['EXPER'] + df[df.JER_OCUP==3]['EXPER2']",
data=df[df.JER_OCUP==3]
)
modelo1_reg = modelo1.fit()
modelo1_reg.summary()
###Output
_____no_output_____
###Markdown
**White collars vs blue collars using a dummy**
###Code
# Run the regression with the WC dummy
modelo_wcd = smf.ols(
"df['LWAGE'] ~ df['EDUC'] + df['EXPER'] + df['EXPER2'] + df['WC']",
data=df
)
reg_wdc = modelo_wcd.fit()
reg_wdc.summary()
###Output
_____no_output_____
###Markdown
**Directors**
###Code
modelo_wcd2 = smf.ols(
"df[df.WC==1]['LWAGE'] ~ df[df.WC==1]['EDUC'] + df[df.WC==1]['EXPER'] + df[df.WC==1]['EXPER2']",
data=df[df.WC==1]
)
reg_wdc2 = modelo_wcd2.fit()
reg_wdc2.summary()
###Output
_____no_output_____
###Markdown
**Salaried employees**
###Code
modelo_wcd3 = smf.ols(
"df[df.WC==0]['LWAGE'] ~ df[df.WC==0]['EDUC'] + df[df.WC==0]['EXPER'] + df[df.WC==0]['EXPER2']",
data=df[df.WC==0]
)
reg_wdc3 = modelo_wcd3.fit()
reg_wdc3.summary()
###Output
_____no_output_____
###Markdown
**Histogram and distribution of _LWAGE_ for directors**
###Code
sb.distplot(
df1[df1.CEO==1]['LWAGE'],
bins=15,
kde_kws={'color': 'g', 'label':'Mean: 9.69 \n Std:0.784'},
axlabel='LWAGE CEO=1'
)
###Output
_____no_output_____
###Markdown
**Histogram and distribution of _LWAGE_ for managers**
###Code
sb.distplot(
df1[df1.CEO==0]['LWAGE'],
bins=15,
kde_kws={'color': 'R', 'label':'Mean: 9.8 \n Std:0.669'},
axlabel='LWAGE CEO=0'
)
df1[df1.CEO==0]['LWAGE'].describe()
df1[df1.CEO==0]['LWAGE'].describe()
import this
###Output
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
|
mwsql-blogpost/mwsql-blogpost.ipynb | ###Markdown
Explore wiki project data faster with `mwsql`The `mwsql` library is the latest addition to [MediaWiki-utilities](https://www.mediawiki.org/wiki/Mediawiki-utilities), a collection of lightweight Python tools for extracting and processing MediaWiki data. It provides a simple interface for downloading, inspecting, and transforming SQL dump files into other more user-friendly formats such as Pandas dataframes or CSV. `mwsql` is available through PyPI and can be installed using `pip`. Why mwsql?Data from Wikimedia projects is open-source licensed and publicly available in a variety of formats, such as:* [Data dumps](https://dumps.wikimedia.org/) in SQL, XML, and HTML format* [Database replicas](https://wikitech.wikimedia.org/wiki/Portal:Data_ServicesWiki_Replicas) thorough Toolforge, PAWS, or Quarry* [API endpoints](https://www.mediawiki.org/wiki/API:REST_API/Reference)While utilities for working with most of these data sources have existed for quite some time, for example `mwapi` and `mwxml`, no such tool existed for SQL dumps. Because of this gap, developing `mwsql` was proposed as a joint [Outreachy](https://www.mediawiki.org/wiki/Outreachy) project between the Research and Technical Engagement teams during the May-August round of 2021. SQL dumpsBefore diving into exploring the different features of `mwsql`, let's take a look at what a raw SQL dump file looks like. ![raw_sql_dump.png](attachment:raw_sql_dump.png) The dump contains information related to the database table structure, as well as the actual table contents (records) in the form of a list of SQL statements. There is also some additional metadata. Database dumps are most often used for backing up a database so that its contents can be restored in the event of data loss. They are not designed to be worked with 'as is', e.g., parsed, filtered or searched. However, having the ability to access data directly from the dumps allows offline processing and lowers the barrier for users with only basic Python knowledge, such as data scientists, researchers, or journalists because the only prerequisite is basic Python knowledge. `mwsql` features`mwsql` main features are:* easily downloading SQL dump files* parsing the database table into a `Dump` object* allowing fast exploration of the table's metadata and contents * transforming the SQL dump into other more convenient data structures and file formatsThe rest of this tutorial will demonstrate each of these features through a concrete example. Use `mwsql` with a Wiki data dumpLet's use the [Simple English Wikipedia](https://simple.wikipedia.org/wiki/Main_Page) database 'simplewiki'. The latest dumps can be found [here](https://dumps.wikimedia.org/simplewiki/). Downloading dump filesIf you access the dump files from a WMF-hosted environment, `mwsql` recognizes this and simply creates a pointer to the public directory where the dump file is found. If you are working on your local machine, the load function first downloads the dump file to your current working directory, and then creates a pointer to it once the download is complete.By default, `mwsql`'s load function assumes that you want the 'latest' version of the dumps, but you can add a specific date as an optional parameter. If you don't use 'latest' , the date format should be 'YYYYMMDD'. Now, let's see this in action.
###Code
!pip install mwsql
from mwsql import load
# Load 'simplewiki-latest-category.sql.gz'
dump_file = load('simplewiki', 'category')
# Load 'simplewiki-20220301-category.sql.gz'
dump_file_older = load('simplewiki', 'category', date='20220301')
###Output
100%|██████████████████████████████████████████████████████| 660k/660k [00:00<00:00, 1.16MiB/s]
100%|██████████████████████████████████████████████████████| 660k/660k [00:00<00:00, 1.22MiB/s]
###Markdown
The example above is from a locally-run Jupyter notebook, so the files were downloaded displaying a progress bar. Once the download is complete, the files can be found in the same directory as the notebook. ![homedir_after_download.png](attachment:homedir_after_download.png) Parsing the SQL dump fileNow that we have access to the files, we can start exploring. Note that we haven't had to worry about where to download the files from, nor whether they're compressed (.gz). If the files weren't compressed, everything would work just the same, as `mwsql` takes care of all the details in the background.
###Code
from mwsql import Dump
# Create a Dump object
dump = Dump.from_file(dump_file)
###Output
_____no_output_____
###Markdown
Exploring the dataIn just two lines of code, we have created a Dump object that contains all the data from the file. Well, not exactly. If we were working with a very large file -- and most of the dump files are indeed very large -- we would quickly run out of memory if we tried ingesting all the file contents at once.In practice, most of the parsing happens only when we explicitly request the rows, or records, of the table. The file contents are processed sequentially, holding only parts of the file in memory. No matter how large the file is, we will not run out of RAM.Let's start exploring the data.
###Code
# Display the first 10 rows in the table
dump.head()
###Output
['cat_id', 'cat_title', 'cat_pages', 'cat_subcats', 'cat_files']
['1', 'Category_needed', '21', '1', '0']
['2', 'Articles_that_need_to_be_wikified', '49', '48', '0']
['4', 'South_Korea', '26', '16', '0']
['6', '1780s', '20', '17', '0']
['7', 'Matter', '25', '3', '0']
['8', '2007', '21', '16', '0']
['10', 'Germany', '50', '24', '0']
['11', '1985', '16', '10', '0']
['13', '"Politics_of"_templates', '5', '0', '0']
['14', '0s', '12', '5', '0']
###Markdown
The `head` method displays the first rows as lists of strings. The very first row contains the column names, followed by the table records. By default, ten rows are shown, but you can choose to display any number by passing it in as an argument:
###Code
dump.head(3)
###Output
['cat_id', 'cat_title', 'cat_pages', 'cat_subcats', 'cat_files']
['1', 'Category_needed', '21', '1', '0']
['2', 'Articles_that_need_to_be_wikified', '49', '48', '0']
['4', 'South_Korea', '26', '16', '0']
###Markdown
We can also get some metadata:
###Code
print(f'database name: {dump.db}')
print(f'table name: {dump.name}')
print(f'primary key: {dump.primary_key}')
print(f'encoding: {dump.encoding}', end='\n\n')
for key, val in dump.sql_dtypes.items():
print(f'{key}: {val}')
###Output
database name: simplewiki
table name: category
primary key: ['cat_id']
encoding: utf-8
cat_id: int(10) unsigned NOT NULL AUTO_INCREMENT
cat_title: varbinary(255) NOT NULL DEFAULT ''
cat_pages: int(11) NOT NULL DEFAULT 0
cat_subcats: int(11) NOT NULL DEFAULT 0
cat_files: int(11) NOT NULL DEFAULT 0
###Markdown
For all available types of metadata, see the `mwsql` [documentation](https://mwsql.readthedocs.io/en/latest/index.html).You may have noticed that the primary key is a list containing only one item. The reason for this is that a SQL table can have more than one primary key.All Wikimedia SQL dumps are encoded using `utf-8`. Unfortunately, some fields can contain non-recognized characters, raising an encoding error when attempting to parse the dump file. If this happens, you many need to try experimenting with different encodings, such as `latin-1` or `ISO 8859-1`. You can learn more about this, and other, known issues in the [documentation](https://mwsql.readthedocs.io/en/latest/readme.htmlknown-issues).While the `head` method printed out all the data as strings, we can also access the rows and request proper Python data types:
###Code
rows = dump.rows(convert_dtypes=True)
for _ in range(5):
print(next(rows))
###Output
[1, 'Category_needed', 21, 1, 0]
[2, 'Articles_that_need_to_be_wikified', 49, 48, 0]
[4, 'South_Korea', 26, 16, 0]
[6, '1780s', 20, 17, 0]
[7, 'Matter', 25, 3, 0]
###Markdown
Neat. However, anyone serious about working with the data from the SQL dumps probably knows that core Python alone won't do it. `mwsql` is not meant to be a substitute for the real deal, but a bridge to whatever tools are in our data science treasure chest. We have two options here: the first is turning the `Dump` object directly into a more fitting data structure, and the second is to convert the `Dump` to CSV, which is a file format that is universally supported by data science and big data libraries and frameworks. As an example of the first option, here is how we could create a Pandas DataFrame, in one line of code:
###Code
import pandas as pd
# create the Pandas Dataframe
category_df = pd.DataFrame(dump.rows(convert_dtypes=True), columns=dump.col_names)
# display first five records
category_df.head()
###Output
_____no_output_____
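###Markdown
For dumps that are too large to hold in memory as a DataFrame, the same `dump.rows()` generator shown earlier can be consumed lazily instead -- a sketch (the `cat_pages > 1000` filter is just an arbitrary example):
###Code
# Stream the rows and keep only large categories, without building a full DataFrame
big_categories = [row for row in dump.rows(convert_dtypes=True) if row[2] > 1000]  # row[2] is cat_pages
print(len(big_categories))
###Output
_____no_output_____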
###Markdown
Let's get some info about the Dataframe:
###Code
category_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 58069 entries, 0 to 58068
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 cat_id 58069 non-null int64
1 cat_title 58069 non-null object
2 cat_pages 58069 non-null int64
3 cat_subcats 58069 non-null int64
4 cat_files 58069 non-null int64
dtypes: int64(4), object(1)
memory usage: 2.2+ MB
###Markdown
We can see that the Dataframe has approximately 58k rows, and that it uses around 2.2 MB of memory. That is of course a lot of Wikipedia categories, but as dataset sizes go, it's on the tiny side. Contrary to mwsql's `Dump` object, Pandas will attempt to fit an entire Dataframe inside your machine's RAM. If you are working with one of the larger files, you will need to keep this in mind.Before deciding how to deal with the dataset, you may want to get an idea of its size:
###Code
print(f'dump size: {dump.size} bytes')
###Output
dump size: 660450 bytes
###Markdown
660450 bytes is approximately 0.66 MB. Keep in mind, however, that this is the size of the *compressed* file. As we saw earlier, the DataFrame was ~2.2 MB, or roughly 4X larger. How to deal with large datasets is ultimately left to the user because `mwsql` makes no assumptions about how the data will be used. As mentioned earlier, it aspires to be a bridge between the messy SQL dump files and the actual tools people use to work with datasets, be it Pandas, Dask, distributed processing frameworks, etc.This brings us to the second option mentioned earlier: the possibility to convert the `Dump` object into a CSV file. Writing to CSVThe `dump` object has a `to_csv` method, which does exactly what its name implies. This method is a thin wrapper around Python's built-in `csv.writer()`, and therefore accepts the same (optional) keyword arguments. Let's see the most basic use case in action, writing to a file that will be created in the current working directory, without any additional arguments:
###Code
dump.to_csv('category.csv')
###Output
_____no_output_____ |
FINAL_TABLES.ipynb | ###Markdown
Import Overall Data
###Code
df_de_soz_de = pd.read_csv('2007_2016_soz_de.csv')
df_de_soz_fr = pd.read_csv('2007_2016_soz_fr.csv')
df_de_straf_de = pd.read_csv('2007_2016_straf_de.csv')
df_de_straf_fr = pd.read_csv('2007_2016_straf_fr.csv')
df_de_ö = pd.read_csv('2007_2016_ör_de.csv')
df_fr_ö = pd.read_csv('2007_2016_ör_fr.csv')
df_de_ziv = pd.read_csv('2007_2016_ziv_de.csv')
df_fr_ziv = pd.read_csv('2007_2016_zivil_fr.csv')
df_overall = df_de_soz_de.merge(df_de_soz_fr, how='left', left_on='Years', right_on='Years')
df_overall = df_overall.merge(df_de_straf_de, how='left', left_on='Years', right_on='Years')
df_overall = df_overall.merge(df_de_straf_fr, how='left', left_on='Years', right_on='Years')
df_overall = df_overall.merge(df_de_ö, how='left', left_on='Years', right_on='Years')
df_overall = df_overall.merge(df_fr_ö, how='left', left_on='Years', right_on='Years')
df_overall = df_overall.merge(df_de_ziv, how='left', left_on='Years', right_on='Years')
df_overall = df_overall.merge(df_fr_ziv, how='left', left_on='Years', right_on='Years')
del df_overall['Unnamed: 0_x']
del df_overall['Datetime_x']
del df_overall['Unnamed: 0_y']
del df_overall['Datetime_y']
df_overall.index = df_overall['Years']
del df_overall['Years']
df_overall['Total'] = df_overall.sum(axis=1)
df_overall
df_overall['Total'].plot()
df_overall = df_overall.reset_index()
lm = smf.ols(formula="Years~Total",data=df_overall).fit()
intercept, slope = lm.params
lm.params
df_overall.plot(x="Years",y="Total")
plt.plot(df_overall["Years"],slope*df_overall["Years"]+intercept,"-",color="red")
###Output
_____no_output_____
###Markdown
Import Clerk Data
###Code
df_fr = pd.read_csv('soz_500_fr.csv')
df_de = pd.read_csv('soz_500_de.csv')
df_de_straf = pd.read_csv('straf_500_de.csv')
df_fr_straf = pd.read_csv('straf_500_fr.csv')
#df_de_ö = pd.read_csv('ör_500_de.csv')
#df_fr_ö = pd.read_csv('ör_500_fr.csv')
df_de_ziv = pd.read_csv('ziv_500_de.csv')
df_fr_ziv = pd.read_csv('zivil_500_fr.csv')
###Output
_____no_output_____
###Markdown
Harmonising Column names
###Code
df_fr.columns = ['Schreiberharm', 'ELEM 500 COUNT', 'TOTAL APPEALS', 'ElempCase', 'Gutgeheissen']
#df_fr_ö.columns = ['Schreiberharm', 'ELEM 500 COUNT', 'TOTAL APPEALS', 'ElempCase', 'Gutgeheissen']
df_fr_straf.columns = ['Schreiberharm', 'ELEM 500 COUNT', 'TOTAL APPEALS', 'ElempCase', 'Gutgeheissen']
df_fr_ziv.columns = ['Schreiberharm', 'ELEM 500 COUNT', 'TOTAL APPEALS', 'ElempCase', 'Gutgeheissen']
###Output
_____no_output_____
###Markdown
Concat the files
###Code
frames = [df_fr, df_de, df_de_straf, df_fr_straf, df_de_ziv, df_fr_ziv]
df = pd.concat(frames)
df.head()
###Output
_____no_output_____
###Markdown
Slope or not?
###Code
lm = smf.ols(formula="ElempCase~Gutgeheissen",data=df).fit()
intercept, slope = lm.params
lm.params
df.plot(kind='scatter', x="Gutgeheissen",y="ElempCase")
plt.plot(df["Gutgeheissen"],slope*df["Gutgeheissen"]+intercept,"-",color="red")
df.to_csv('data_for_statcheck.csv')
###Output
_____no_output_____
###Markdown
Who are the outliers?
###Code
df.info()
df[df['ElempCase'] > 6]
df['ElempCase'].describe()
###Output
_____no_output_____ |
GPS_Panel/ClassifierPanel_KML.ipynb | ###Markdown
Parameters
###Code
path_T = "Los_Loros/TH_02_index_thermal_ir.tif"
ZonaPV = 'Test'
path_kml_panel = 'Los_Loros/KML/Paneles_' + ZonaPV +'.kml'
path_kml_mesa ='Los_Loros/KML/Mesa_' + ZonaPV +'.kml'
path_dict = 'Los_Loros/KML/Mesa_' + ZonaPV + '.pickle'
path_new_dict = 'Los_Loros/KML/Mesa_' + ZonaPV + '_classifier.pickle'
GR_T = gr.from_file(path_T)
GR_T.raster.data[GR_T.raster.data == -10000] = 0  # zero out the nodata value
geot_T = GR_T.geot
## Load List in coordinate latitud and longitude ###
with open(path_dict, "rb") as fp:
L_strings_coord = pickle.load(fp)
###Output
_____no_output_____
###Markdown
Load Classifier
###Code
path_dataset = './Classifier/Data_set_2/Data_prueba_0/'
output_recognizer = path_dataset + "model_SVM/recognizer.pickle"
output_label = path_dataset + "model_SVM/le.pickle"
img_width, img_height = 224, 224
base_model = tf.keras.applications.Xception(input_shape=(img_height, img_width, 3), weights='imagenet', include_top=False)
x = base_model.output
x = tf.keras.layers.GlobalAveragePooling2D()(x)
base_model = tf.keras.models.Model(inputs=base_model.input, outputs=x)
recognizer_SVM = pickle.loads(open(output_recognizer, "rb").read())
le = pickle.loads(open(output_label, "rb").read())
###Output
_____no_output_____
###Markdown
Classify each panel
###Code
epsilon = -2
matrix_expand_bounds = [[-epsilon, -epsilon],[+epsilon, -epsilon], [+epsilon, +epsilon], [-epsilon, +epsilon]]
for string_key in L_strings_coord.keys():
print(string_key)
string = L_strings_coord[string_key]
for panel_key in string['panels'].keys():
panel = string['panels'][panel_key]
Points = Utils.gps2pixel(panel['points'], geot_T) + matrix_expand_bounds
if not GR_T.raster.data[Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]].size == 0:
Im = np.zeros((img_height, img_width, 3))
Im[:,:,0] = cv2.resize(GR_T.raster.data[Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]], (img_width, img_height))
Im[:,:,1] = Im[:,:,0].copy()
Im[:,:,2] = Im[:,:,0].copy()
panel['status'], panel['prob'] = classifier(base_model, recognizer_SVM, le, Im)
else:
print('problem with coords panel: ', string_key, '_', panel_key)
plt.figure(figsize=(6, 6))
plt.imshow(Im.astype(int))
epsilon = 10
matrix_expand_bounds = [[-epsilon, -epsilon],[+epsilon, -epsilon], [+epsilon, +epsilon], [-epsilon, +epsilon]]
panel = string['panels']['2']
Points = Utils.gps2pixel(panel['points'], geot_T) + matrix_expand_bounds
plt.figure(figsize=(6, 6))
plt.imshow(GR_T.raster.data[Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]],cmap = 'gray')
###Output
_____no_output_____
###Markdown
Save KML Panels
###Code
kml=simplekml.Kml()
for string_key in L_strings_coord.keys():
string = L_strings_coord[string_key]
points = string['points']
for panel_key in string['panels'].keys():
panel = string['panels'][panel_key]
points = panel['points']
pmt = kml.newpolygon(outerboundaryis = points)
pmt.extendeddata.newdata(name= 'Id integer', value= str(string_key).zfill(3) + '_' + str(panel['id']).zfill(3))
pmt.extendeddata.newdata(name= 'Id panel', value= str(panel['id']).zfill(3))
pmt.extendeddata.newdata(name='Zona PV', value= ZonaPV)
pmt.extendeddata.newdata(name='Cód. Fall', value= 0)
pmt.extendeddata.newdata(name= 'Tipo falla', value= panel['status'])
pmt.extendeddata.newdata(name= 'Mesa', value= string['id'])
pmt.extendeddata.newdata(name= 'T°', value= panel['T'])
kml.save(path_kml_panel)
## Save List in coordinate latitud and longitude ###
with open(path_new_dict, 'wb') as handle:
pickle.dump(L_strings_coord, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Listo')
plt.imshow(GR_T.raster.data[Points[0][1] : Points[2][1], Points[0][0]: Points[2][0]].astype(int), cmap = 'gray')
###Output
_____no_output_____ |
1. GettingStarted.ipynb | ###Markdown
Getting started with Cosmos notebooks In this notebook, we'll learn how to use Cosmos notebook features. We'll create a database and container, import some sample data in a container in Azure Cosmos DB and run some queries over it. Create new database and container To connect to the service, you can use our built-in instance of ```cosmos_client```. This is a ready to use instance of [CosmosClient](https://docs.microsoft.com/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient?view=azure-python) from our Python SDK. It already has the context of this account baked in. We'll use ```cosmos_client``` to create a new database called **RetailDemo** and container called **WebsiteData**. Our dataset will contain events that occurred on the website - e.g. a user viewing an item, adding it to their cart, or purchasing it. We will partition by CartId, which represents the individual cart of each user. This will give us an even distribution of throughput and storage in our container. Learn more about how to [choose a good partition key.](https://docs.microsoft.com/azure/cosmos-db/partition-data)
###Code
import azure.cosmos
from azure.cosmos.partition_key import PartitionKey
database = cosmos_client.create_database_if_not_exists('RetailDemo')
print('Database RetailDemo created')
container = database.create_container_if_not_exists(id='WebsiteData', partition_key=PartitionKey(path='/CartID'))
print('Container WebsiteData created')
###Output
Database RetailDemo created
Container WebsiteData created
###Markdown
Set the default database and container context to the new resources. We can use the ```%database {database_id}``` and ```%container {container_id}``` syntax.
###Code
%database RetailDemo
%container WebsiteData
###Output
_____no_output_____
###Markdown
Load in sample JSON data and insert into the container. We'll use the **%%upload** magic function to insert items into the container
###Code
%%upload --databaseName RetailDemo --containerName WebsiteData --url https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData-small.json
###Output
_____no_output_____
###Markdown
The new database and container should show up under the **Data** section. Use the refresh icon after completing the previous cell. Run a query using the built-in Azure Cosmos notebook magic
###Code
%%sql
SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
###Output
_____no_output_____ |
logistic regression project 2.ipynb | ###Markdown
Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments. The Conversation AI team, a research initiative founded by Jigsaw and Google (both part of Alphabet), is working on tools to help improve online conversation. One area of focus is the study of negative online behaviors, like toxic comments (i.e. comments that are rude, disrespectful or otherwise likely to make someone leave a discussion). So far they’ve built a range of publicly available models served through the Perspective API, including toxicity. But the current models still make errors, and they don’t allow users to select which types of toxicity they’re interested in finding (e.g. some platforms may be fine with profanity, but not with other types of toxic content). In this competition, you’re challenged to build a multi-headed model that’s capable of detecting different types of toxicity like threats, obscenity, insults, and identity-based hate better than Perspective’s current models. You’ll be using a dataset of comments from Wikipedia’s talk page edits. Improvements to the current model will hopefully help online discussion become more productive and respectful. We are provided with a large number of Wikipedia comments which have been labeled by human raters for toxic behavior. The types of toxicity are: toxic, severe_toxic, obscene, threat, insult, identity_hate. You must create a model which predicts a probability of each type of toxicity for each comment.
###Code
# imports assumed by this notebook (not shown in the original cells)
import numpy as np
import pandas as pd
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

class_names = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
train = pd.read_csv('/Users/harshitadidwania/Desktop/train.csv').fillna(' ')
test = pd.read_csv('/Users/harshitadidwania/Desktop/test.csv').fillna(' ')
train.head()
test.head()
train_text = train['comment_text']
test_text = test['comment_text']
all_text = pd.concat([train_text, test_text])
word_vectorizer = TfidfVectorizer(
sublinear_tf=True,
strip_accents='unicode',
analyzer='word',
token_pattern=r'\w{1,}',
stop_words='english',
ngram_range=(1, 1),
max_features=10000)
word_vectorizer.fit(all_text)
train_word_features = word_vectorizer.transform(train_text)
test_word_features = word_vectorizer.transform(test_text)
train_word_features
###Output
_____no_output_____
###Markdown
In addition to the word-level features, we build character-level TF-IDF features; the feature stack below combines both.
###Code
char_vectorizer = TfidfVectorizer(
    sublinear_tf=True,
    strip_accents='unicode',
    analyzer='char',
    stop_words='english',
    ngram_range=(2, 6),
    max_features=50000)
char_vectorizer.fit(all_text)
train_char_features = char_vectorizer.transform(train_text)
test_char_features = char_vectorizer.transform(test_text)
train_features = hstack([train_char_features, train_word_features])
test_features = hstack([test_char_features, test_word_features])
scores = []
submission = pd.DataFrame.from_dict({'id': test['id']})
for class_name in class_names:
train_target = train[class_name]
classifier = LogisticRegression(C=0.1, solver='sag')
cv_score = np.mean(cross_val_score(classifier, train_features, train_target, cv=3, scoring='roc_auc'))
scores.append(cv_score)
print('CV score for class {} is {}'.format(class_name, cv_score))
classifier.fit(train_features, train_target)
submission[class_name] = classifier.predict_proba(test_features)[:, 1]
print('Total CV score is {}'.format(np.mean(scores)))
predictions= classifier.predict(test_features)
submission.to_csv('submission.csv', index=False)
predictions
pd.read_csv('submission.csv')
###Output
_____no_output_____ |
tutorials/constructor_or_non_standard_sequence.ipynb | ###Markdown
Constructor or non standard sequence In previous tutorials it was discussed how to perform calculations with the standard NICE scheme, which is reflected by the class StandardSequence. But the NICE toolbox provides broader opportunities. It is possible, for example, to combine the latest covariants with each other at each step in order to get 2^n body order features after n iterations. In the previous tutorials the model was defined by the StandardSequence class, whose initialization method accepts instances of other classes such as ThresholdExpansioner or InvariantsPurifier. These blocks can be used on their own to construct a custom model. First of all, we need to calculate the spherical expansion coefficients as in previous tutorials:
###Code
# downloading dataset from https://archive.materialscloud.org/record/2020.110
!wget "https://archive.materialscloud.org/record/file?file_id=b612d8e3-58af-4374-96ba-b3551ac5d2f4&filename=methane.extxyz.gz&record_id=528" -O methane.extxyz.gz
!gunzip -k methane.extxyz.gz
import numpy as np
import ase.io
import tqdm
from nice.blocks import *
from nice.utilities import *
from matplotlib import pyplot as plt
from sklearn.linear_model import BayesianRidge
structures = ase.io.read('methane.extxyz', index='0:1000')
HYPERS = {
'interaction_cutoff': 6.3,
'max_radial': 5,
'max_angular': 5,
'gaussian_sigma_type': 'Constant',
'gaussian_sigma_constant': 0.05,
'cutoff_smooth_width': 0.3,
'radial_basis': 'GTO'
}
all_species = get_all_species(structures)
coefficients = get_spherical_expansion(structures, HYPERS, all_species)
###Output
--2020-10-14 21:04:38-- https://archive.materialscloud.org/record/file?file_id=b612d8e3-58af-4374-96ba-b3551ac5d2f4&filename=methane.extxyz.gz&record_id=528
Resolving archive.materialscloud.org (archive.materialscloud.org)... 148.187.96.41
Connecting to archive.materialscloud.org (archive.materialscloud.org)|148.187.96.41|:443... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: https://object.cscs.ch/archive/b6/12/d8e3-58af-4374-96ba-b3551ac5d2f4/data?response-content-type=application%2Foctet-stream&response-content-disposition=attachment%3B%20filename%3Dmethane.extxyz.gz&Expires=1602702338&Signature=%2BW5BcV4kYmjkwE01%2FO9%2FEymZiTY%3D&AWSAccessKeyId=ee64314446074ed3ab5f375a522a4893 [following]
--2020-10-14 21:04:38-- https://object.cscs.ch/archive/b6/12/d8e3-58af-4374-96ba-b3551ac5d2f4/data?response-content-type=application%2Foctet-stream&response-content-disposition=attachment%3B%20filename%3Dmethane.extxyz.gz&Expires=1602702338&Signature=%2BW5BcV4kYmjkwE01%2FO9%2FEymZiTY%3D&AWSAccessKeyId=ee64314446074ed3ab5f375a522a4893
Resolving object.cscs.ch (object.cscs.ch)... 148.187.25.200, 148.187.25.202, 148.187.25.201
Connecting to object.cscs.ch (object.cscs.ch)|148.187.25.200|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1218139661 (1.1G) [application/octet-stream]
Saving to: ‘methane.extxyz.gz’
methane.extxyz.gz 100%[===================>] 1.13G 80.7MB/s in 35s
2020-10-14 21:05:13 (33.4 MB/s) - ‘methane.extxyz.gz’ saved [1218139661/1218139661]
###Markdown
Up to this point the coefficients for each central species form a 4-dimensional numpy array with indexing [environmental index, radial basis/species index, lambda, m]. Let's focus only on H-centered environments:
###Code
coefficients = coefficients[1]
print(coefficients.shape)
###Output
(4000, 10, 6, 11)
###Markdown
The first step is to perform the initial scaling, as discussed in the first tutorial. For this purpose there is the class InitialScaler:
###Code
initial_scaler = InitialScaler(mode='signal integral', individually=False)
initial_scaler.fit(coefficients)
coefficients = initial_scaler.transform(coefficients)
###Output
_____no_output_____
###Markdown
If individually is set to False, this class requires fitting before transforming the data. Otherwise fitting is not required. Since we are going to track the parity of covariants, i.e. keep even and odd features separated, we need to split them at the beginning of our calculations:
###Code
data_even_1, data_odd_1 = InitialTransformer().transform(coefficients)
print(type(data_even_1))
print(data_even_1.covariants_.shape)
print("even features sizes: ", data_even_1.actual_sizes_)
print("odd features sizes: ", data_odd_1.actual_sizes_)
###Output
<class 'nice.nice_utilities.Data'>
(4000, 10, 6, 11)
even features sizes: [10, 0, 10, 0, 10, 0]
odd features sizes: [0, 10, 0, 10, 0, 10]
###Markdown
The result is a couple of Data instances, which were already discussed in the tutorial "Calculating covariants". All spherical expansion coefficients with even l remain constant under reflections, i.e. they are even covariants, while all spherical expansion coefficients with odd l change sign under reflection, i.e. they are odd covariants. The PCA and purifier blocks have two versions: one transforms a single chunk of data of a certain parity, and the other applies the same transformation to both. For example:
###Code
pca = IndividualLambdaPCAs(n_components=5) #single parity version
pca.fit(data_even_1)
data_even_1_t = pca.transform(data_even_1)
print(data_even_1_t.actual_sizes_)
pca = IndividualLambdaPCAsBoth() #both version
pca.fit(data_even_1, data_odd_1)
data_even_1_t, data_odd_1_t = pca.transform(data_even_1, data_odd_1)
###Output
[5 0 5 0 5 0]
###Markdown
One common thing among the PCA and purifier blocks is the num_to_fit semantics. Each class has a num_to_fit argument in its initialization, which by default equals '10x'. If num_to_fit is a string of the form 'number x', the corresponding class uses no more than that number multiplied by the number of components (in the case of PCA), or by the number of coefficients in the linear regression (in the case of purifiers), data points. Data points are counted as all entries of the covariants, i.e. for lambda = 3, for example, each environment brings (3 * 2 + 1) = 7 data points, since the dimensionality of a single covariant vector is (2 * lambda + 1). So, for instance, with num_to_fit='10x' and 50 regression coefficients, at most 500 entries are used, which for a lambda = 3 block corresponds to about 71 environments. If num_to_fit is an int, the same is done using the provided number as the upper bound on the number of data points, regardless of the actual number of PCA components or linear regression coefficients. If the total available number of data points is less than the number specified by num_to_fit, the class raises a warning that there is not enough data. If num_to_fit is None, the corresponding block always uses all available data for fitting. This is done because the overall model is very diverse, and different parts of the model require very different amounts of data for good fitting; thus it is a good idea to impose such restrictions to speed up the process. In the case of PCA, if n_components specified in the constructor is larger than the actual number of features given during the fit step, it is decreased to the actual number of features. But if the number of data points is less than the number of components after this possible decrease (which makes it impossible to produce that many components), a ValueError is raised with a demand to provide more data for fitting. In order to do the PCA step in the invariants branch there is the class InvariantsPCA, which differs from sklearn.decomposition.PCA only by the num_to_fit semantics:
###Code
pca = InvariantsPCA(num_to_fit='300x')
ar = np.random.rand(400, 10)
pca.fit(ar)
print(pca.transform(ar).shape)
###Output
(400, 10)
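###Markdown
The other two forms of num_to_fit described above look like this (only the constructors are shown, for illustration):
###Code
# a hard upper bound on the number of data points used for fitting
pca_capped = InvariantsPCA(num_to_fit=2000)
# always use all available data
pca_all = InvariantsPCA(num_to_fit=None)
###Output
_____no_output_____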
###Markdown
For purifiers there are the classes CovariantsPurifier, CovariantsPurifierBoth, InvariantsPurifier, and CovariantsIndividualPurifier. Their purpose is to transform data of a single parity, both chunks of data, invariants, and a single lambda channel, respectively. Their fit and transform methods accept a list of covariants/invariants of previous body orders along with the current body order. For example (let's pretend that we already have features of several body orders):
###Code
purifier = CovariantsPurifier(max_take=3)
purifier.fit([data_even_1, data_even_1], data_even_1)
data_even_1_t = purifier.transform([data_even_1, data_even_1], data_even_1)
###Output
_____no_output_____
###Markdown
As already mentioned in the first tutorial, purifiers can accept arbitrary sklearn-shaped linear regressors, i.e. anything with fit and predict methods. See the tutorial "Custom regressors into purifiers" for an example of such a custom regressor (a minimal sketch is also shown after the next cell). In order to do the expansion with the thresholding heuristics it is necessary to know how important particular features are. One way is to assign the .importance_ property of the Data class (a setter will be added in the next version of NICE). The other is to pass the features through PCA, which automatically assigns importances:
###Code
pca = IndividualLambdaPCAsBoth()
pca.fit(data_even_1, data_odd_1)
data_even_1, data_odd_1 = pca.transform(data_even_1, data_odd_1)
###Output
_____no_output_____
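###Markdown
As mentioned above, a purifier only needs a regressor exposing fit and predict. A minimal sketch of such an "sklearn shaped" regressor (illustrative only; how to actually pass it into a purifier is covered in the "Custom regressors into purifiers" tutorial):
###Code
from sklearn.linear_model import Ridge

class MyRidge:
    """Thin wrapper with the fit/predict interface the purifiers expect."""
    def __init__(self, alpha=1.0):
        self.model_ = Ridge(alpha=alpha)

    def fit(self, X, y):
        self.model_.fit(X, y)
        return self

    def predict(self, X):
        return self.model_.predict(X)
###Output
_____no_output_____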
###Markdown
ThresholdExpansioner's fit and transform methods accept two even-odd pairs of data. If the first pair is of body order v1 and the second pair is of body order v2, the result is of body order v1 + v2:
###Code
expansioner = ThresholdExpansioner(num_expand=200)
expansioner.fit(data_even_1, data_odd_1, data_even_1, data_odd_1)
data_even_2, data_odd_2 = expansioner.transform(data_even_1, data_odd_1,\
data_even_1, data_odd_1)
print(data_even_2.actual_sizes_)
print(data_odd_2.actual_sizes_)
###Output
[ 70 69 165 142 176 121]
[ 0 124 112 178 140 150]
###Markdown
Most of the time during fitting is spent precomputing the Clebsch-Gordan coefficients. Thus, when fitting expansioners frequently with the same lambda_max, it is a good idea to precompute the Clebsch-Gordan coefficients once and then just feed the expansioners with them:
###Code
clebsch = nice.clebsch_gordan.ClebschGordan(5)  # 5 is lambda max
expansioner = ThresholdExpansioner(num_expand=200)
expansioner.fit(data_even_1,
data_odd_1,
data_even_1,
data_odd_1,
clebsch_gordan=clebsch)
###Output
_____no_output_____
###Markdown
It might be useful to investigate how useful the thresholding heuristic actually is in practice. For this purpose it is possible to get "raw importances" for the output features, which are the products of the importances of the input features used in the Clebsch-Gordan iteration. In other words, this is the selection criterion itself. Let's make a scatter plot showing how the selection criterion correlates with, for example, the variance of the output features. We will use invariants for simplicity:
###Code
expansioner = ThresholdExpansioner(num_expand=200, mode='invariants')
expansioner.fit(data_even_1,
data_odd_1,
data_even_1,
data_odd_1,
clebsch_gordan=clebsch)
invariants_even, _ = expansioner.transform(data_even_1, data_odd_1,\
data_even_1, data_odd_1)
print(invariants_even.shape)
variances = np.mean(((invariants_even - np.mean(invariants_even, axis=0))**2),
axis=0)
raw_importances = expansioner.new_even_raw_importances_
plt.plot(np.sqrt(raw_importances), variances, 'o')
plt.xscale('log')
plt.yscale('log')
plt.xlabel('raw importance')
plt.ylabel('variance')
###Output
(4000, 200)
###Markdown
There is some correlation, so the thresholding heuristic works. Getters for raw importances might be added in the next version of NICE. A StandardBlock has the same input to its fit and transform methods as ThresholdExpansioner if it doesn't contain purifiers:
###Code
block = StandardBlock(ThresholdExpansioner(num_expand=200), None,
IndividualLambdaPCAsBoth(n_components=10))
block.fit(data_even_1, data_odd_1, data_even_1, data_odd_1)
data_even_2, data_odd_2, invariants_even = block.transform(data_even_1, data_odd_1,\
data_even_1, data_odd_1)
print(data_even_2.actual_sizes_)
print(invariants_even)
###Output
[10 10 10 10 10 10]
None
###Markdown
In this case the invariants branch was None, and thus it returned None for the invariants. This behavior is the opposite of StandardSequence's, since StandardSequence always returns invariants: if the invariants branch of some block is None, it returns the [:, :, 0, 0] part of the covariants instead. If a block contains an invariants purifier, then old_even_invariants should be specified in the fit and transform methods. If a block contains a covariants purifier, then old_even_covariants and old_odd_covariants should be specified. old_even_invariants should be a list of 2-dimensional numpy arrays with the previous invariants; old_even_covariants and old_odd_covariants should be lists of Data instances.
###Code
block = StandardBlock(ThresholdExpansioner(num_expand=200),
CovariantsPurifierBoth(max_take=10), None,
ThresholdExpansioner(num_expand=200, mode='invariants'),
InvariantsPurifier(max_take=10), None)
block.fit(
data_even_2,
data_odd_2,
data_even_1,
data_odd_1,
old_even_invariants=[data_even_1.get_invariants()
], # returns [:, :, 0, 0] slice which is invariants
old_even_covariants=[data_even_1],
old_odd_covariants=[data_odd_1])
data_even_3, data_odd_3, invariants_even_3 = block.transform(
data_even_2,
data_odd_2,
data_even_1,
data_odd_1,
old_even_invariants=[data_even_1.get_invariants()
], # returns [:, :, 0, 0] slice which is invariants
old_even_covariants=[data_even_1],
old_odd_covariants=[data_odd_1])
###Output
_____no_output_____
###Markdown
If a block contains purifiers but fit or transform is called without providing the necessary data, it raises a ValueError. Another useful method is get_intermediate_shapes, as in StandardSequence:
###Code
for key, value in block.get_intermediate_shapes().items(
): # it is a dictionary
print(key, value)
###Output
after covariants expansioner [[33, 89, 125, 140, 141, 123], [28, 84, 123, 143, 143, 125]]
after covariants purifier [[33, 89, 125, 140, 141, 123], [28, 84, 123, 143, 143, 125]]
after invariants expansioner 200
after invariants purifier 200
###Markdown
StandardSequence was already discussed in the first tutorial, "Constructing machine learning potential". Now let's go to body order 1024!
###Code
data_even_now, data_odd_now = data_even_1, data_odd_1
for _ in tqdm.tqdm(range(10)):
pca = IndividualLambdaPCAsBoth(10)
pca.fit(data_even_now, data_odd_now)
data_even_now, data_odd_now = pca.transform(data_even_now, data_odd_now)
expansioner = ThresholdExpansioner(50)
expansioner.fit(data_even_now,
data_odd_now,
data_even_now,
data_odd_now,
clebsch_gordan=clebsch)
data_even_now, data_odd_now = expansioner.transform(
data_even_now, data_odd_now, data_even_now, data_odd_now)
# very high body order cause numerical instabilities,
# and, thus, there is need to normalize data
for lambd in range(6):
size = data_even_now.actual_sizes_[lambd]
if (size > 0):
even_factor = np.sqrt(
np.mean(data_even_now.covariants_[:, :size, lambd]**2))
if (even_factor > 1e-15): #catch exact zeros
data_even_now.covariants_[:, :size, lambd] /= even_factor
size = data_odd_now.actual_sizes_[lambd]
if (size > 0):
odd_factor = np.sqrt(
np.mean(data_odd_now.covariants_[:, :size, lambd]**2))
if (odd_factor > 1e-15): #catch exact zeros
data_odd_now.covariants_[:, :size, lambd] /= odd_factor
print(data_even_now.covariants_.shape)
print(data_even_now.actual_sizes_)
print(data_odd_now.actual_sizes_)
###Output
(4000, 28, 6, 11)
[ 7 19 25 28 28 25]
[ 8 18 24 26 28 26]
|
Essentials/Exponential weighted moving average.ipynb | ###Markdown
Exponential Weighted Moving Average (EWMA) Thanks to the simplicity of its computation, the exponentially weighted moving average (EWMA) is one of the widely used tools for data analysis, machine learning, etc. Unlike other types of moving averages (Simple, Exponential Moving Average, etc.) with a period greater than 2, EWMA does not need the history of previous values to compute the current value. All it needs is a **fixed weight** `w` (which determines how much history is taken into account), **the value of the current element**, and **the previously computed EWMA value**. General EWMA computation The EWMA value for each element is computed according to the formula: $$y_t = wy_{t-1} + (1-w)x_t$$ $y_t$ is the EWMA result, $w$ is the chosen weight, where $w \in (0,1)$, $y_{t-1}$ is the previous EWMA result (for the first element it is usually initialized to 0 or to the current measured value), $x_t$ is the value of the current element. The relationship between the weight and the history that influences the current result After a mathematical derivation we can find out how many past values influence the current result. The formula for the period $p \approx \frac{1}{1-w}$ given by [Andrew Ng in his tutorial video on YouTube](https://www.youtube.com/watch?v=lAq96T8FkTw) unfortunately contains an inaccuracy; it should correctly be: $$p \approx \frac{2}{1-w}$$ $w$ is the chosen weight, where $w \in (0,1)$, $p$ is the period, where $p \geq 1$. *The period tells us that elements deeper in the history have a negligible influence on the EWMA result for the current element.* From the previous formula it is easy to derive **which weight `w` to choose**: $$w \approx 1 - \frac{2}{p}$$ $w$ is the chosen weight, where $w \in (0,1)$, $p$ is the period, where $p \geq 1$. Example: EWMA on a price chart First, let's get the data:
###Code
import datetime
import pandas_datareader as pdr  # provides pdr.data.DataReader used below
start = datetime.datetime(2018, 1, 1)
end = datetime.datetime(2019, 1, 1)
spy_data = pdr.data.DataReader('SPY', 'yahoo', start, end)
spy_data.drop(['High', 'Low', 'Open', 'Close', 'Volume'], axis=1, inplace=True) # these columns are not needed
spy_data.head(5)
###Output
_____no_output_____
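###Markdown
As a quick sanity check, here is the recurrence $y_t = wy_{t-1} + (1-w)x_t$ from above applied to a small list of numbers (a minimal illustration, independent of the price data below):
###Code
def ewma(values, w):
    """Exponentially weighted moving average with weight w in (0, 1)."""
    result = []
    y_prev = values[0]  # initialize with the first measured value
    for x in values:
        y_prev = w * y_prev + (1 - w) * x
        result.append(y_prev)
    return result

print(ewma([1, 2, 3, 4, 5], w=0.9))
###Output
_____no_output_____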
###Markdown
EWMA over the last 20 elements If I want to compute an EWMA where the last 20 elements influence the result, I compute the weight using the formula mentioned above:
###Code
p = 20
w = 1-(2/p)
w
###Output
_____no_output_____
###Markdown
On a price chart, EWMA is usually computed from the Close/Last price, so the formula looks like this: $$y_t = wy_{t-1} + (1-w)c_t$$ $y_t$ is the EWMA value for the current price, $w$ is the chosen weight, where $w \in (0,1)$, $y_{t-1}$ is the previous EWMA result (for the first value it is usually initialized to 0 or to the current measured value), $c_t$ is the current Close price. The following code serves only to demonstrate the use of the formula.
###Code
spy_data['EWMA_mannualy'] = spy_data['Adj Close']
for i in range(spy_data.shape[0]):
if i==0:
spy_data['EWMA_mannualy'][i] = spy_data['Adj Close'][i]
continue
spy_data['EWMA_mannualy'][i] = w*spy_data['EWMA_mannualy'][i-1] + (1-w)*spy_data['Adj Close'][i]
spy_data.head()
###Output
_____no_output_____
###Markdown
Pandas uses a so-called smoothing factor instead of a weight. A simple and optimized solution can be found in the Pandas library using the exponentially weighted **rolling** function `df.ewm()`. This function has an `alpha` parameter, which represents the so-called *smoothing factor*. Instead of a weight, Pandas uses this *smoothing factor* in the computation as follows (source: [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html)): $$y_t = (1-\alpha)y_{t-1} + \alpha C_t$$ $y_t$ is the EWMA value for the current price, $\alpha$ is the smoothing factor, where $\alpha \in (0,1)$, $y_{t-1}$ is the previous EWMA result (for the first value it is usually initialized to 0 or to the current measured value), $C_t$ is the current Close price. After a simple derivation, the **smoothing factor** can be computed from the weight: $$\alpha = 1-w$$ $w$ is the weight, where $w \in (0,1)$, $\alpha$ is the smoothing factor, where $\alpha \in (0,1)$. Or the **smoothing factor** from the **period**: $$\alpha \approx \frac{2}{p}$$ $p$ is the period, where $p \geq 1$, $\alpha$ is the smoothing factor, where $\alpha \in (0,1)$. *Note: here I disagree with pandas, which refers to the period as the `span` parameter and whose documentation uses the formula $\alpha = \frac{2}{p+1}$; but that would not be consistent with Andrew Ng.*
###Code
a = 1-w
a
###Output
_____no_output_____
###Markdown
*Any small discrepancy may be caused by the precision of floating-point numbers; more here: https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems*
###Code
spy_data['EWMA'] = spy_data['Adj Close'].ewm(alpha=0.1, adjust=False).mean()
spy_data['EWMA_period'] = spy_data['Adj Close'].ewm(span=p, adjust=False).mean()
spy_data.head()
###Output
_____no_output_____
###Markdown
To illustrate the difference between a Simple Moving Average and EWMA, here is an example:
###Code
spy_data['SMA'] = spy_data['Adj Close'].rolling(p).mean()
###Output
_____no_output_____
###Markdown
And finally, let's display the averages in a chart:
###Code
spy_data[['Adj Close', 'EWMA_mannualy', 'EWMA', 'EWMA_period', 'SMA']].plot(figsize=(16,10));
###Output
_____no_output_____ |
cso/4_refs_reasoning/sameas_clusters.ipynb | ###Markdown
Construct owl:sameAs clusters.
###Code
# imports assumed by this notebook; `pairs` is assumed to be a DataFrame of
# owl:sameAs pairs with columns 's' and 'o', loaded in an earlier step
import rdflib
from rdflib.namespace import OWL

uri_to_cluster = dict()
clusters = []
for _, r in pairs.iterrows():
s = r['s']
o = r['o']
cluster_no = uri_to_cluster.get(s, None)
if cluster_no is None:
cluster_no = uri_to_cluster.get(o, None)
if cluster_no is None:
uri_to_cluster[s] = len(clusters)
uri_to_cluster[o] = len(clusters)
clusters.append({s, o})
else:
uri_to_cluster[s] = cluster_no
uri_to_cluster[o] = cluster_no
clusters[cluster_no].add(s)
clusters[cluster_no].add(o)
###Output
_____no_output_____
###Markdown
Build an RDF graph, while pruning the external URIs (we won't need them later).
###Code
g = rdflib.Graph()
for cluster in clusters:
to_include = [uri for uri in cluster if uri.startswith('https://cso.kmi.open.ac.uk/topics/')]
for i in to_include:
for j in to_include:
if i != j:
g.add((
rdflib.URIRef(i),
OWL.sameAs,
rdflib.URIRef(j)
))
g.serialize(format='ttl', destination='results/sameAs_inferred.ttl')
###Output
_____no_output_____ |
Anomaly Detection Methods Final Work/Tukeys Box plot Method.ipynb | ###Markdown
Tukey’s box plot method: Tukey distinguishes between possible and probable outliers. (1) A possible outlier is located between the inner and the outer fence, (2) whereas a probable outlier is located outside the outer fence. ![Image](Outputs/Tukeys-Box-plot-method/Tukey’s-box.jpg) First, apply the method to a random grid
###Code
# imports and data load assumed by this notebook (the same file is read again further below)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv('final_data.csv', parse_dates=["time"])

fig, ax = plt.subplots(figsize=(30,10))
ax.plot(df.groupby("grid_square").get_group(5056).index,
df.groupby("grid_square").get_group(5056)['internet_cdr'], color='blue', label = 'Normal')
ax.set_title('Random grid(5056) ', fontsize=20)
plt.legend()
plt.show();
def tukeys_method(df, variable):
#Takes two parameters: dataframe & variable of interest as string
q1 = df[variable].quantile(0.25)
q3 = df[variable].quantile(0.75)
iqr = q3-q1
inner_fence = 1.5*iqr
outer_fence = 3*iqr
#inner fence lower and upper end
inner_fence_le = q1-inner_fence
inner_fence_ue = q3+inner_fence
#outer fence lower and upper end
outer_fence_le = q1-outer_fence
outer_fence_ue = q3+outer_fence
outliers_prob = []
outliers_poss = []
for index, x in enumerate(df[variable]):
if x <= outer_fence_le or x >= outer_fence_ue:
outliers_prob.append(index)
for index, x in enumerate(df[variable]):
if x <= inner_fence_le or x >= inner_fence_ue:
outliers_poss.append(index)
return outliers_prob, outliers_poss
random_grid=df.groupby("grid_square").get_group(5056)
probable_outliers_tm, possible_outliers_tm = tukeys_method(random_grid, "internet_cdr")
print(probable_outliers_tm)
print("*****************************************************************************************")
print(possible_outliers_tm)
len(probable_outliers_tm)
len(possible_outliers_tm)
anomaly = pd.DataFrame(possible_outliers_tm)
anomaly['Anomaly'] = 1
anomaly.set_index(0, inplace=True)
random_grid = pd.concat([random_grid, anomaly], axis=1)
random_grid['Anomaly'] = random_grid['Anomaly'].replace(np.nan, False)
random_grid['Anomaly'] = random_grid['Anomaly'].replace(1.0, True)
random_grid
random_grid['Anomaly'].value_counts()
# fig1 = px.line(random_grid, y="internet_cdr")
# fig1.update_traces(line=dict(color = 'magenta'))
# anomaly = random_grid.loc[random_grid['Anomaly'] == True, ['internet_cdr']]
# fig2 = px.scatter(anomaly,y="internet_cdr")
# fig3 = go.Figure(data=fig1.data + fig2.data)
# fig3.update_layout(title="Random grid(5056) anomalies points for box blot method")
# fig3.show()
fig, ax = plt.subplots(figsize=(30,10))
anomaly = random_grid.loc[random_grid['Anomaly'] == True, ['internet_cdr']]
ax.plot(random_grid.index, random_grid['internet_cdr'], color='blue', label = 'Normal')
ax.scatter(anomaly.index,anomaly['internet_cdr'], color='red', label = 'Anomaly')
ax.set_title('Random grid(5056) anomalies points for box blot method', fontsize=20)
plt.legend()
plt.show();
###Output
_____no_output_____
###Markdown
Second, apply the method to all grids
###Code
df = pd.read_csv('final_data.csv',parse_dates= ["time"])
full_grid = df.groupby("grid_square")
grids = list(full_grid.groups.keys())
grids
data=[]
for grid in grids:
full_grid = df.groupby("grid_square").get_group(grid)
data.append(full_grid)
data
x=len(grids)
x
data_2=pd.DataFrame()
anomalies= pd.DataFrame()
for i in range(x):
probable_outliers_tm, possible_outliers_tm = tukeys_method(data[i], "internet_cdr")
anomaly = pd.DataFrame(possible_outliers_tm)
anomaly['Anomaly'] = 1
anomaly.set_index(0, inplace=True)
data_2 = pd.concat([data[i].reset_index(drop=True), anomaly], axis=1)
print("========== grid number {} done ==========".format(i+1))
print(data_2['Anomaly'].value_counts())
anomalies=anomalies.append(data_2)
print(data_2)
print("===================================================================")
# print(anomalies)
# probable_outliers_tm, possible_outliers_tm = tukeys_method(data[2], "internet_cdr")
# anomaly = pd.DataFrame(possible_outliers_tm)
# anomaly['Anomaly'] = 1
# anomaly.set_index(0, inplace=True)
# x=pd.concat([data[2].reset_index(drop=True), anomaly], axis=1)
# print (x)
# print(x['Anomaly'].value_counts())
# print("========== grid number {} done ==========".format(i+1))
anomalies.reset_index(drop=True,inplace=True)
anomalies['Anomaly'] = anomalies['Anomaly'].replace(np.nan, False)
anomalies['Anomaly'] = anomalies['Anomaly'].replace(1.0, True)
anomalies['Anomaly'].value_counts()
fig, ax = plt.subplots(figsize=(30,10))
ax.plot(df.index, df['internet_cdr'], color='blue', label = 'Normal')
ax.set_title('Total grids', fontsize=20)
plt.show();
fig, ax = plt.subplots(figsize=(30,10))
anomaly = anomalies.loc[anomalies['Anomaly'] == True, ['internet_cdr']]
ax.plot(df.index, df['internet_cdr'], color='blue', label = 'Normal')
ax.scatter(anomaly.index,anomaly['internet_cdr'], color='red', label = 'Anomaly')
ax.set_title('Total grids anomalies points for box blot method', fontsize=20)
plt.legend()
plt.show();
###Output
_____no_output_____ |
1. Neural Networks and Deep Learning/W3 Planar data classification with one hidden layer v4.ipynb | ###Markdown
Planar data classification with one hidden layerWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. **You will learn how to:**- Implement a 2-class classification neural network with a single hidden layer- Use units with a non-linear activation function, such as tanh - Compute the cross entropy loss - Implement forward and backward propagation 1 - Packages Let's first import all the packages that you will need during this assignment.- [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis. - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.- testCases provides some test examples to assess the correctness of your functions- planar_utils provide various useful functions used in this assignment
###Code
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
###Output
_____no_output_____
###Markdown
2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
###Code
X, Y = load_planar_dataset()
print("X.shape: ", X.shape)
print("Y.shape: ", Y.shape)
Y
###Output
X.shape: (2, 400)
Y.shape: (1, 400)
###Markdown
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
###Code
# Visualize the data:
plt.scatter(X[0,:], X[1,:], c=Y[0,:], s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____
###Markdown
You have: - a numpy-array (matrix) X that contains your features (x1, x2) - a numpy-array (vector) Y that contains your labels (red:0, blue:1). Let's first get a better sense of what our data is like. **Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`? **Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
###Code
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = X.shape[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
###Output
The shape of X is: (2, 400)
The shape of Y is: (1, 400)
I have m = 400 training examples!
###Markdown
**Expected Output**: **shape of X** (2, 400) **shape of Y** (1, 400) **m** 400 3 - Simple Logistic Regression Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
###Code
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
###Output
D:\Anaconda3\envs\tensorflow\lib\site-packages\sklearn\utils\validation.py:724: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
D:\Anaconda3\envs\tensorflow\lib\site-packages\sklearn\model_selection\_split.py:1978: FutureWarning: The default value of cv will change from 3 to 5 in version 0.22. Specify it explicitly to silence this warning.
warnings.warn(CV_WARNING, FutureWarning)
###Markdown
You can now plot the decision boundary of these models. Run the code below.
###Code
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
print('Accuracy of Logistic regression classifier on training set: {:.2f}'.format(clf.score(X.T, Y.T)))
###Output
Accuracy of logistic regression: 47 % (percentage of correctly labelled datapoints)
Accuracy of Logistic regression classifier on training set: 0.47
###Markdown
**Expected Output**: **Accuracy** 47% **Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network modelLogistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.**Here is our model**:**Mathematically**:For one example $x^{(i)}$:$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\tag{3}$$$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$**Reminder**: The general methodology to build a Neural Network is to: 1. Define the neural network structure ( of input units, of hidden units, etc). 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent)You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data. 4.1 - Defining the neural network structure **Exercise**: Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
###Code
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case() # from testCases_v2 import *
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
###Output
The size of the input layer is: n_x = 5
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 2
###Markdown
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). **n_x** 5 **n_h** 4 **n_y** 2 4.2 - Initialize the model's parameters **Exercise**: Implement the function `initialize_parameters()`.**Instructions**:- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.- You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).- You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h,n_x)*0.01
b1 = np.zeros((n_h,1))
W2 = np.random.randn(n_y,n_h)*0.01
b2 = np.zeros((n_y,1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 1
W1 = [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]]
b2 = [[0.]]
###Markdown
**Expected Output**: **W1** [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01057952 -0.00909008 0.00551454 0.02292208]] **b2** [[ 0.]] 4.3 - The Loop **Question**: Implement `forward_propagation()`.**Instructions**:- Look above at the mathematical representation of your classifier.- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.- You can use the function `np.tanh()`. It is part of the numpy library.- The steps you have to implement are: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1,X)+b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2,A1)+b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1])) # (n_y, m)
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
###Output
0.26281864019752443 0.09199904522700109 -1.3076660128732143 0.21287768171914198
###Markdown
**Expected Output**: 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.**Instructions**:- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented $- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs)    # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters): # parameters is actucally not needed here
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of example
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = np.multiply(np.log(A2),Y)+ np.multiply(np.log(1-A2),(1-Y)) # element-wise product: np.multiply(), 或 *
cost = -(1/m)* np.sum(logprobs)
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
###Output
cost = 0.6930587610394646
###Markdown
**Expected Output**: **cost** 0.693058761... Using the cache computed during forward propagation, you can now implement backward propagation.**Question**: Implement the function `backward_propagation()`.**Instructions**:Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. <!--$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$- Note that $*$ denotes elementwise multiplication.- The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !-->- Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters W1, b1, W2 and b2
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache["A1"]
A2 = cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2-Y
dW2 = (1/m)*np.dot(dZ2,np.transpose(A1))
db2 = (1/m)*np.sum(dZ2,axis=1,keepdims=True)
dZ1 = np.multiply(np.dot(np.transpose(W2),dZ2),1 - np.power(A1, 2))
dW1 = (1/m)*np.dot(dZ1,np.transpose(X))
db1 = (1/m)*np.sum(dZ1,axis=1,keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
###Output
dW1 = [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]]
db1 = [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]]
dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]
###Markdown
**Expected output**: **dW1** [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] **db1** [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] **dW2** [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] **db2** [[-0.16655712]] **Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
grads -- python dictionary containing your gradients dW1, db1, dW2 and db2
Returns:
parameters -- python dictionary containing your updated parameters W1, b1, W2 and b2
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
### END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1-learning_rate*dW1
b1 = b1-learning_rate*db1
W2 = W2-learning_rate*dW2
b2 = b2-learning_rate*db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]
b1 = [[-1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[-3.20136836e-06]]
W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]]
b2 = [[0.00010457]]
###Markdown
**Expected Output**: **W1** [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] **b1** [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]] **W2** [[-0.01041081 -0.04463285 0.01758031 0.04747113]] **b2** [[ 0.00010457]] 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() **Question**: Build your neural network model in `nn_model()`.**Instructions**: The neural network model has to use the previous functions in the right order.
###Code
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict. updated W1, b1, W2 and b2
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 1
Cost after iteration 0: 0.692739
Cost after iteration 1000: 0.000218
Cost after iteration 2000: 0.000107
Cost after iteration 3000: 0.000071
Cost after iteration 4000: 0.000053
Cost after iteration 5000: 0.000042
Cost after iteration 6000: 0.000035
Cost after iteration 7000: 0.000030
Cost after iteration 8000: 0.000026
Cost after iteration 9000: 0.000023
W1 = [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]
b1 = [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]]
W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]]
b2 = [[0.20459656]]
###Markdown
**Expected Output**: **cost after iteration 0** 0.692739 $\vdots$ $\vdots$ **W1** [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] **b1** [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] **W2** [[-2.45566237 -3.27042274 2.00784958 3.36773273]] **b2** [[ 0.20459656]] 4.5 Predictions**Question**: Use your model to predict by building predict().Use forward propagation to predict results.**Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
###Code
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters. updated W1, b1, W2 and b2
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
threshold=0.5
predictions = (A2 > threshold)
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
print("predictions =", predictions)
print("X_assess =", X_assess)
###Output
predictions mean = 0.6666666666666666
predictions = [[ True False True]]
X_assess = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
###Markdown
**Expected Output**: **predictions mean** 0.666666666667 It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
###Code
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
###Output
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 1
Cost after iteration 0: 0.693048
Cost after iteration 1000: 0.288083
Cost after iteration 2000: 0.254385
Cost after iteration 3000: 0.233864
Cost after iteration 4000: 0.226792
Cost after iteration 5000: 0.222644
Cost after iteration 6000: 0.219731
Cost after iteration 7000: 0.217504
Cost after iteration 8000: 0.219440
Cost after iteration 9000: 0.218553
###Markdown
**Expected Output**: **Cost after iteration 9000** 0.218607
###Code
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
###Output
Accuracy: 90%
###Markdown
**Expected Output**: **Accuracy** 90% Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. 4.6 - Tuning hidden layer size (optional/ungraded exercise) Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
###Code
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
###Output
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 1
The size of the output layer is: n_y = 1
Accuracy for 1 hidden units: 67.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 2
The size of the output layer is: n_y = 1
Accuracy for 2 hidden units: 67.25 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 3
The size of the output layer is: n_y = 1
Accuracy for 3 hidden units: 90.75 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 1
Accuracy for 4 hidden units: 90.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 5
The size of the output layer is: n_y = 1
Accuracy for 5 hidden units: 91.25 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 20
The size of the output layer is: n_y = 1
Accuracy for 20 hidden units: 90.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 50
The size of the output layer is: n_y = 1
Accuracy for 50 hidden units: 90.75 %
###Markdown
**Interpretation**:- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. **Optional questions**:**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right. Some optional/ungraded questions that you can explore if you wish: - What happens when you replace the tanh activation with a sigmoid or a ReLU activation? (a small sketch of the ReLU swap follows below)- Play with the learning_rate. What happens?- What if we change the dataset? (See part 5 below!) **You've learnt to:**- Build a complete neural network with a hidden layer- Make good use of a non-linear unit- Implement forward propagation and backpropagation, and train a neural network- See the impact of varying the hidden layer size, including overfitting. Nice work! 5) Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
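###Markdown
As a starting point for the first optional question, here is a minimal sketch (not part of the graded assignment) comparing the tanh and ReLU hidden activations and the derivative term each contributes to dZ1. It assumes `forward_propagation()` computes `A1 = np.tanh(Z1)`, as the earlier tip implies; an actual ReLU version would also need `Z1` retrieved from the cache inside `backward_propagation()`.
###Code
import numpy as np

# Sketch only: tanh vs. ReLU hidden activation and their derivatives.
Z1 = np.array([[-1.5, 0.2],
               [ 0.7, -0.3]])

A1_tanh = np.tanh(Z1)
dtanh = 1 - np.power(A1_tanh, 2)   # tanh'(z) written in terms of a = tanh(z)

A1_relu = np.maximum(0, Z1)        # forward pass with ReLU instead of tanh
drelu = (Z1 > 0).astype(float)     # ReLU'(z): 1 where z > 0, 0 elsewhere

print(dtanh)
print(drelu)
###Output
_____no_output_____
###Markdown
Back to the alternative datasets: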
###Code
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y[0,:], s=40, cmap=plt.cm.Spectral);
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
###Output
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 1
The size of the output layer is: n_y = 1
Accuracy for 1 hidden units: 86.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 2
The size of the output layer is: n_y = 1
Accuracy for 2 hidden units: 88.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 3
The size of the output layer is: n_y = 1
Accuracy for 3 hidden units: 96.0 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 1
Accuracy for 4 hidden units: 96.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 5
The size of the output layer is: n_y = 1
Accuracy for 5 hidden units: 86.5 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 20
The size of the output layer is: n_y = 1
Accuracy for 20 hidden units: 87.0 %
The size of the input layer is: n_x = 2
The size of the hidden layer is: n_h = 50
The size of the output layer is: n_y = 1
Accuracy for 50 hidden units: 87.0 %
|
3_From_Image_to_ML.ipynb | ###Markdown
Recap: basic computer vision pipeline1. Image -> get feature -> model -> result 1. feature: HOG, LBP, Euclidean distance 2. Model: Regression, Classification (logistic regression) Classification1. Least squares 1. Function to determine class: $y=W V^T = w_1v_1+w_2v_2+\dots$, 1. y is the class, and we need to calculate W. 2. $v_{ij}$ is the j-th feature of sample i. 3. N equations for N samples. 2. Instead, solve the equations obtained by setting the partial derivatives of the squared error to zero: $$\hat{y_i}=W V_i^T$$ $$L=\sum_i(\hat{y_i}-y_i^*)^2$$ $$\text{then} \; \dfrac{\partial L}{\partial w_j} = \dfrac{\partial \sum_i(\hat{y_i}-y_i^*)^2}{\partial w_j}=0$$ The number of such equations equals the number of weights $w_j$2. Maximum Likelihood Estimate (MLE)$$Max(L) : L=\prod_{i=1}^{N} P_{i}$$$$P_i = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left(-\frac{(y_i^*-\hat{y_i})^{2}}{2 \sigma^{2}}\right)$$$$Max(\log L):\sum_{i=1}^{N} \log \frac{1}{\sqrt{2 \pi} \sigma}-\frac{1}{2 \sigma^{2}} \cdot \sum_{i}^{N}(y_i^*-\hat{y_i})^{2}$$$$\text{equivalent to} \; Min\left(\frac{1}{2 \sigma^{2}} \cdot \sum_{i}^{N}(y_i^*-\hat{y_i})^{2}\right)$$This matches least squares Gradient descentThis is an optimization method to calculate W without explicitly solving the equations.1. Process 1. initialize $W=W_0$ 2. $y=W_0 V^T$ and calculate the loss 3. If the loss is small enough, output W. If not, keep updating W until it converges.2. Loss 1. L2: $$\sum_i(\hat{y_i}-y_i^*)^2$$3. Update rule 1. When the variable moves in the direction of the negative gradient, the function value decreases. 2. Gradient to calculate here (chain rule): $$\frac{d L}{d w} = \frac{d L}{d y} \cdot \frac{d y}{d w}$$ 3. Update (lr is the learning rate, a hyperparameter): $$ w_{k+1}=w_{k}-\frac{\partial L}{\partial w_{k}} \cdot lr $$ (a short standalone example follows below) Practice: use gradient descent to classify number images
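###Markdown
Before the digit exercise, here is a self-contained toy illustration of the update rule above on a 1-D least-squares problem. It is only a sketch: the data, learning rate and iteration count are made-up values, not part of the original practice.
###Code
import numpy as np

# Toy illustration of w_{k+1} = w_k - lr * dL/dw on 1-D least squares.
x = np.array([1.0, 2.0, 3.0, 4.0])
y_true = 2.5 * x          # data generated by the weight we hope to recover
w = 0.0                   # step 1: initialize W = W_0
lr = 0.01                 # learning rate (hyperparameter)

for k in range(200):
    y_hat = w * x                              # step 2: predict and compute the loss
    grad = np.sum(2 * (y_hat - y_true) * x)    # dL/dw for L = sum_i (y_hat_i - y_i*)^2
    w = w - lr * grad                          # step 3: update until it converges

print(w)   # approaches 2.5
###Output
_____no_output_____
###Markdown
With a feature vector instead of a single input, $W$ becomes a vector and the same update is applied to every component, which is what the digit example below does.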
###Code
import torch
def generate_data():
# generage number matrix
image_data=[]
num_0 = torch.tensor(
[[0,0,1,1,0,0],
[0,1,0,0,1,0],
[0,1,0,0,1,0],
[0,1,0,0,1,0],
[0,0,1,1,0,0],
[0,0,0,0,0,0]])
image_data.append(num_0)
num_1 = torch.tensor(
[[0,0,0,1,0,0],
[0,0,1,1,0,0],
[0,0,0,1,0,0],
[0,0,0,1,0,0],
[0,0,1,1,1,0],
[0,0,0,0,0,0]])
image_data.append(num_1)
num_2 = torch.tensor(
[[0,0,1,1,0,0],
[0,1,0,0,1,0],
[0,0,0,1,0,0],
[0,0,1,0,0,0],
[0,1,1,1,1,0],
[0,0,0,0,0,0]])
image_data.append(num_2)
num_3 = torch.tensor(
[[0,0,1,1,0,0],
[0,0,0,0,1,0],
[0,0,1,1,0,0],
[0,0,0,0,1,0],
[0,0,1,1,0,0],
[0,0,0,0,0,0]])
image_data.append(num_3)
num_4 = torch.tensor(
[
[0,0,0,0,1,0],
[0,0,0,1,1,0],
[0,0,1,0,1,0],
[0,1,1,1,1,1],
[0,0,0,0,1,0],
[0,0,0,0,0,0]])
image_data.append(num_4)
num_5 = torch.tensor(
[
[0,1,1,1,0,0],
[0,1,0,0,0,0],
[0,1,1,1,0,0],
[0,0,0,0,1,0],
[0,1,1,1,0,0],
[0,0,0,0,0,0]])
image_data.append(num_5)
num_6 = torch.tensor(
[[0,0,1,1,0,0],
[0,1,0,0,0,0],
[0,1,1,1,0,0],
[0,1,0,0,1,0],
[0,0,1,1,0,0],
[0,0,0,0,0,0]])
image_data.append(num_6)
num_7 = torch.tensor(
[
[0,1,1,1,1,0],
[0,0,0,0,1,0],
[0,0,0,1,0,0],
[0,0,0,1,0,0],
[0,0,0,1,0,0],
[0,0,0,0,0,0]])
image_data.append(num_7)
num_8 = torch.tensor(
[[0,0,1,1,0,0],
[0,1,0,0,1,0],
[0,0,1,1,0,0],
[0,1,0,0,1,0],
[0,0,1,1,0,0],
[0,0,0,0,0,0]])
image_data.append(num_8)
num_9 = torch.tensor(
[[0,0,1,1,1,0],
[0,1,0,0,1,0],
[0,0,1,1,1,0],
[0,1,0,0,1,0],
[0,0,0,0,1,0],
[0,0,0,0,0,0]])
image_data.append(num_9)
return image_data
def get_feature(x):
"""feature extraction"""
feature = torch.sum(x,0)+torch.sum(x,1)
feature = feature[0:3]
return feature
image_data = generate_data()
print(get_feature(image_data[0]))
###Output
tensor([2, 5, 4])
###Markdown
Calculate the gradient of the squared error with respect to the weights $W$:$$\dfrac{d(WV^T-y^*)^2}{dW} = 2(WV^T-y^*)\,V$$
###Code
import random
import numpy as np
class model:
def __init__(self, size, lr):
self.w = np.array([1 for _ in range(size)])
self.lr = lr
def loss(self,function='L2'):
if function=='L2':
return np.sum((self.y_hat-self.y)**2)
def step(self,x,y):
self.x = np.array(x)
self.y = np.array(y)
self.y_hat = np.dot(self.x,self.w)
        gradient = 2*(np.dot(self.w, self.x)-y)*self.x  # dL/dw = 2*(w.v - y)*v
#print(self.w)
#print(self.x)
#print(self.y)
#print(gradient)
self.w = self.w - gradient*self.lr
def predict(self,x):
return int(np.dot(x,self.w))
labels = [0,1,2,3,4,5,6,7,8,9]
m = model(3,0.001)
for epoch in range(1000):
loss = 0
for i,x in enumerate(image_data):
v = get_feature(x)
y = labels[i]
m.step(v,y)
loss += m.loss()
#print(loss)
print(loss/len(image_data))
m.predict(get_feature(image_data[6]))
###Output
_____no_output_____
###Markdown
Logistic Regression$$y=\frac{1}{1+e^{-(w v+b)}}$$ Logistic regression on Boston Dataset
###Code
import numpy as np
from sklearn.datasets import load_boston
from sklearn.utils import shuffle, resample
import matplotlib.pyplot as plt
#load data
data = load_boston()
X = data['data']
y = data['target']
print('data keys:', data.keys())
type(data)
from sklearn.utils import Bunch
Bunch?
X.shape
y.shape
#data preprocessing
#normalize
X = (X-np.mean(X, axis=0))/np.std(X,axis=0)
y=y.reshape(y.shape[0],1)
y.shape
def sigmoid(x):
return 1/(1+np.exp(-x))
def ReLU(z):
a = np.where(z<0,0,z)
return a
nums = np.arange(-10,10,step=1)
plt.plot(nums,ReLU(nums))
def MSE_loss(y, y_hat):
loss = np.mean(np.square(y_hat-y))
return loss
def Linear(x,w,b):
return x.dot(w)+b
#initialization
n = X.shape[0]
n_features = X.shape[1]
W = np.random.randn(n_features,1)
b = np.zeros(1)
lr = 1e-2
epoches=10000
###Output
_____no_output_____
###Markdown
$$L = \frac{1}{2N}\sum_{i=1}^{N}(z^{(i)} - y^{(i)})^{2} $$$$z^{(i)} = \sum_{j}x_j^{(i)}w^{(j)} + b$$$$\frac{\partial L}{\partial w^{(j)}} = \frac{1}{N}\sum_{i}^{N}(z^{(i)} - y^{(i)})x_j^{(i)}$$$$\frac{\partial L}{\partial b} = \frac{1}{N}\sum_{i}^{N}(z^{(i)} - y^{(i)})$$
###Code
def gradient(x,z,y):
n = x.shape[0]
grad_w = np.dot(x.T, (z-y))/n
grad_b = np.mean(z-y)
return grad_w, grad_b
#training
losses=[]
for t in range(epoches):
#forward
y_hat = Linear(X,W,b)
loss = MSE_loss(y,y_hat)
losses.append(loss)
#gradient calculation
grad_w, grad_b = gradient(X,y_hat,y)
#weight update
W = W - lr*grad_w
b = b - lr*grad_b
plt.plot(np.arange(len(losses)),losses)
###Output
_____no_output_____
###Markdown
Introduce hidden layer and activation layers
###Code
hidden_size = 10
#initialize hidden layer
W1 = np.random.randn(n_features,hidden_size)
b1 = np.zeros(hidden_size)
W2 = np.random.randn(hidden_size,1) #output is of size 1
b2 = np.zeros(1)
###Output
_____no_output_____
###Markdown
$$z_1^{(i)} = x^{(i)}W_1 + b_1,\quad a_1^{(i)} = \mathrm{ReLU}(z_1^{(i)}),\quad \hat{y}^{(i)} = \mathrm{ReLU}(a_1^{(i)}W_2 + b_2)$$$$\frac{\partial L}{\partial W_2} = \frac{1}{N}\sum_{i}^{N} a_1^{(i)T}(\hat{y}^{(i)} - y^{(i)})$$$$\frac{\partial L}{\partial b_2} = \frac{1}{N}\sum_{i}^{N}(\hat{y}^{(i)} - y^{(i)})$$$$\frac{\partial L}{\partial W_1} = \frac{1}{N}\sum_{i}^{N} x^{(i)T}\left[(\hat{y}^{(i)} - y^{(i)})W_2^{T} \odot \mathbf{1}[z_1^{(i)}>0]\right]$$$$\frac{\partial L}{\partial b_1} = \frac{1}{N}\sum_{i}^{N}(\hat{y}^{(i)} - y^{(i)})W_2^{T} \odot \mathbf{1}[z_1^{(i)}>0]$$
###Code
#training
losses=[]
for t in range(epoches):
#forward
z1 = Linear(X,W1,b1)
a1 = ReLU(z1)
z2 = Linear(a1,W2,b2)
a2 = ReLU(z2)
y_hat = a2
loss = MSE_loss(y,y_hat)
losses.append(loss)
#backpropagation
grad_y_pred = y_hat -y
grad_W2 = np.dot(a1.T,grad_y_pred)/n
grad_b2 = np.mean(grad_y_pred, axis=0)
grad_a1 = grad_y_pred.dot(W2.T)
#print(W2.shape, grad_y_pred.shape,grad_a1.shape)
grad_a1[z1<0] = 0
grad_W1 = np.dot(X.T,grad_a1)/n
grad_b1 = np.mean(grad_a1,axis=0)
#weight update
W1 = W1 - lr*grad_W1
b1 = b1 - lr*grad_b1
W2 = W2 - lr*grad_W2
b2 = b2 - lr*grad_b2
plt.plot(np.arange(len(losses)),losses)
z2.shape
###Output
_____no_output_____
###Markdown
Logistic regression on IRIS dataset Data exploration
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
dataset = load_iris()
inputs = dataset["data"]
target = dataset["target"]
print(inputs.shape)
print(target.shape)
print(set(target))
values = [np.sum(target==0),np.sum(target==1),np.sum(target==2)]
plt.pie(values,labels=[0,1,2],autopct='%1.1f%%')
plt.show()
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
#only use first two classes
two_class_input = inputs[:100]
two_class_target = target[:100]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(two_class_input,two_class_target,test_size=0.3, random_state=0)
print(x_train.shape, x_test.shape, y_train.shape)
# add one feature to x
x_train = np.concatenate([x_train,np.ones((x_train.shape[0],1))],1)
x_test = np.concatenate([x_test,np.ones((x_test.shape[0],1))],1)
print(x_train.shape, x_test.shape, y_train.shape)
###Output
(70, 5) (30, 5) (70,)
###Markdown
Cross Entropy loss+ Loss function$$L=\frac{1}{m}\sum_i^m \big[-y_i\log(f(x_i))-(1-y_i)\log(1-f(x_i))\big]$$+ Gradient$$ \frac{\partial L}{\partial w} = \frac{1}{m}X^T(f(X)-y)$$+ Weight update$$ w = w -\alpha\frac{\partial L}{\partial w}$$
###Code
def CrossEntropyLoss(y,yhat):
return np.mean(-y*np.log(yhat)-(1-y)*np.log(1-yhat))
def CrossEntropyLoss_grad(X,y,yhat):
m = X.size
dif = yhat-y
#print(X.T.shape,dif.shape)
return np.dot(X.T,dif)/m
def forward(x,w):
return sigmoid(np.dot(x,w))
###Output
_____no_output_____
###Markdown
Training
###Code
# weight initialization
w = np.random.normal(scale=0.1, size=(5,))
w[-1]=0
print('Initial weight: ',w)
last_loss=10000
x = x_train
y = y_train
yhat = forward(x,w)
cur_loss=CrossEntropyLoss(y,yhat)
i=0
lr = 0.001
while abs(cur_loss-last_loss)>1.0e-4:
#print(i,last_loss,cur_loss)
last_loss = cur_loss
i+=1
#gradient calculation
    grad = CrossEntropyLoss_grad(x,y,yhat)
#weight update
w += -lr*grad
#forward
yhat = forward(x,w)
cur_loss = CrossEntropyLoss(y,yhat)
if i%100==0:
print("Iteration {}, loss, {:.4f}".format(i,cur_loss))
###Output
Iteration 100, loss, 0.7689
Iteration 200, loss, 0.7264
Iteration 300, loss, 0.6970
Iteration 400, loss, 0.6758
Iteration 500, loss, 0.6598
Iteration 600, loss, 0.6469
Iteration 700, loss, 0.6360
###Markdown
Validate
###Code
x = x_test
y = y_test
y_hat = forward(x, w)            # predicted probabilities for the test set
print(y_hat)
np.mean((y_hat > 0.5) == y)      # accuracy at a 0.5 decision threshold
test_pred = sigmoid(np.dot(x_test, w))
pred_test_y = np.array(test_pred>0.5, dtype=np.float32)
acc = np.mean(pred_test_y==y_test)
print("the accary of model is {}".format(acc))
###Output
the accuracy of the model is 1.0
|
notebooks/spectrograms.ipynb | ###Markdown
Load Data
###Code
# Most of the Spectrograms and Inversion are taken from: https://gist.github.com/kastnerkyle/179d6e9a88202ab0a2fe
import copy

import numpy as np
import scipy.ndimage
from scipy.signal import butter, lfilter
def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
b, a = butter(order, [low, high], btype="band")
return b, a
def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
b, a = butter_bandpass(lowcut, highcut, fs, order=order)
y = lfilter(b, a, data)
return y
def overlap(X, window_size, window_step):
"""
Create an overlapped version of X
Parameters
----------
X : ndarray, shape=(n_samples,)
Input signal to window and overlap
window_size : int
Size of windows to take
window_step : int
Step size between windows
Returns
-------
X_strided : shape=(n_windows, window_size)
2D array of overlapped X
"""
if window_size % 2 != 0:
raise ValueError("Window size must be even!")
# Make sure there are an even number of windows before stridetricks
append = np.zeros((window_size - len(X) % window_size))
X = np.hstack((X, append))
ws = window_size
ss = window_step
a = X
valid = len(a) - ws
nw = (valid) // ss
out = np.ndarray((nw, ws), dtype=a.dtype)
for i in np.arange(nw):
# "slide" the window along the samples
start = i * ss
stop = start + ws
out[i] = a[start:stop]
return out
def stft(
X, fftsize=128, step=65, mean_normalize=True, real=False, compute_onesided=True
):
"""
Compute STFT for 1D real valued input X
"""
if real:
local_fft = np.fft.rfft
cut = -1
else:
local_fft = np.fft.fft
cut = None
if compute_onesided:
cut = fftsize // 2
if mean_normalize:
X -= X.mean()
X = overlap(X, fftsize, step)
size = fftsize
win = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(size) / (size - 1))
X = X * win[None]
X = local_fft(X)[:, :cut]
return X
def pretty_spectrogram(d, log=True, thresh=5, fft_size=512, step_size=64):
"""
creates a spectrogram
log: take the log of the spectrgram
thresh: threshold minimum power for log spectrogram
"""
specgram = np.abs(
stft(d, fftsize=fft_size, step=step_size, real=False, compute_onesided=True)
)
if log == True:
specgram /= specgram.max() # volume normalize to max 1
specgram = np.log10(specgram) # take log
specgram[
specgram < -thresh
] = -thresh # set anything less than the threshold as the threshold
else:
specgram[
specgram < thresh
] = thresh # set anything less than the threshold as the threshold
return specgram
# Also mostly modified or taken from https://gist.github.com/kastnerkyle/179d6e9a88202ab0a2fe
def invert_pretty_spectrogram(
X_s, log=True, fft_size=512, step_size=512 / 4, n_iter=10
):
if log == True:
X_s = np.power(10, X_s)
X_s = np.concatenate([X_s, X_s[:, ::-1]], axis=1)
X_t = iterate_invert_spectrogram(X_s, fft_size, step_size, n_iter=n_iter)
return X_t
def iterate_invert_spectrogram(X_s, fftsize, step, n_iter=10, verbose=False):
"""
Under MSR-LA License
Based on MATLAB implementation from Spectrogram Inversion Toolbox
References
----------
D. Griffin and J. Lim. Signal estimation from modified
short-time Fourier transform. IEEE Trans. Acoust. Speech
Signal Process., 32(2):236-243, 1984.
Malcolm Slaney, Daniel Naar and Richard F. Lyon. Auditory
Model Inversion for Sound Separation. Proc. IEEE-ICASSP,
Adelaide, 1994, II.77-80.
Xinglei Zhu, G. Beauregard, L. Wyse. Real-Time Signal
Estimation from Modified Short-Time Fourier Transform
Magnitude Spectra. IEEE Transactions on Audio Speech and
Language Processing, 08/2007.
"""
reg = np.max(X_s) / 1e8
X_best = copy.deepcopy(X_s)
for i in range(n_iter):
if verbose:
print("Runnning iter %i" % i)
if i == 0:
X_t = invert_spectrogram(
X_best, step, calculate_offset=True, set_zero_phase=True
)
else:
# Calculate offset was False in the MATLAB version
# but in mine it massively improves the result
# Possible bug in my impl?
X_t = invert_spectrogram(
X_best, step, calculate_offset=True, set_zero_phase=False
)
est = stft(X_t, fftsize=fftsize, step=step, compute_onesided=False)
phase = est / np.maximum(reg, np.abs(est))
X_best = X_s * phase[: len(X_s)]
X_t = invert_spectrogram(X_best, step, calculate_offset=True, set_zero_phase=False)
return np.real(X_t)
def invert_spectrogram(X_s, step, calculate_offset=True, set_zero_phase=True):
"""
Under MSR-LA License
Based on MATLAB implementation from Spectrogram Inversion Toolbox
References
----------
D. Griffin and J. Lim. Signal estimation from modified
short-time Fourier transform. IEEE Trans. Acoust. Speech
Signal Process., 32(2):236-243, 1984.
Malcolm Slaney, Daniel Naar and Richard F. Lyon. Auditory
Model Inversion for Sound Separation. Proc. IEEE-ICASSP,
Adelaide, 1994, II.77-80.
Xinglei Zhu, G. Beauregard, L. Wyse. Real-Time Signal
Estimation from Modified Short-Time Fourier Transform
Magnitude Spectra. IEEE Transactions on Audio Speech and
Language Processing, 08/2007.
"""
size = int(X_s.shape[1] // 2)
wave = np.zeros((X_s.shape[0] * step + size))
# Getting overflow warnings with 32 bit...
wave = wave.astype("float64")
total_windowing_sum = np.zeros((X_s.shape[0] * step + size))
win = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(size) / (size - 1))
est_start = int(size // 2) - 1
est_end = est_start + size
for i in range(X_s.shape[0]):
wave_start = int(step * i)
wave_end = wave_start + size
if set_zero_phase:
spectral_slice = X_s[i].real + 0j
else:
# already complex
spectral_slice = X_s[i]
# Don't need fftshift due to different impl.
wave_est = np.real(np.fft.ifft(spectral_slice))[::-1]
if calculate_offset and i > 0:
offset_size = size - step
if offset_size <= 0:
print(
"WARNING: Large step size >50\% detected! "
"This code works best with high overlap - try "
"with 75% or greater"
)
offset_size = step
offset = xcorr_offset(
wave[wave_start : wave_start + offset_size],
wave_est[est_start : est_start + offset_size],
)
else:
offset = 0
wave[wave_start:wave_end] += (
win * wave_est[est_start - offset : est_end - offset]
)
total_windowing_sum[wave_start:wave_end] += win
wave = np.real(wave) / (total_windowing_sum + 1e-6)
return wave
def xcorr_offset(x1, x2):
"""
Under MSR-LA License
Based on MATLAB implementation from Spectrogram Inversion Toolbox
References
----------
D. Griffin and J. Lim. Signal estimation from modified
short-time Fourier transform. IEEE Trans. Acoust. Speech
Signal Process., 32(2):236-243, 1984.
Malcolm Slaney, Daniel Naar and Richard F. Lyon. Auditory
Model Inversion for Sound Separation. Proc. IEEE-ICASSP,
Adelaide, 1994, II.77-80.
Xinglei Zhu, G. Beauregard, L. Wyse. Real-Time Signal
Estimation from Modified Short-Time Fourier Transform
Magnitude Spectra. IEEE Transactions on Audio Speech and
Language Processing, 08/2007.
"""
x1 = x1 - x1.mean()
x2 = x2 - x2.mean()
frame_size = len(x2)
half = frame_size // 2
corrs = np.convolve(x1.astype("float32"), x2[::-1].astype("float32"))
corrs[:half] = -1e30
corrs[-half:] = -1e30
offset = corrs.argmax() - len(x1)
return offset
def make_mel(spectrogram, mel_filter, shorten_factor=1):
mel_spec = np.transpose(mel_filter).dot(np.transpose(spectrogram))
mel_spec = scipy.ndimage.zoom(
mel_spec.astype("float32"), [1, 1.0 / shorten_factor]
).astype("float16")
mel_spec = mel_spec[:, 1:-1] # a little hacky but seemingly needed for clipping
return mel_spec
def mel_to_spectrogram(mel_spec, mel_inversion_filter, spec_thresh, shorten_factor):
"""
takes in an mel spectrogram and returns a normal spectrogram for inversion
"""
mel_spec = mel_spec + spec_thresh
uncompressed_spec = np.transpose(np.transpose(mel_spec).dot(mel_inversion_filter))
uncompressed_spec = scipy.ndimage.zoom(
uncompressed_spec.astype("float32"), [1, shorten_factor]
).astype("float16")
uncompressed_spec = uncompressed_spec - 4
return uncompressed_spec
# From https://github.com/jameslyons/python_speech_features
def hz2mel(hz):
"""Convert a value in Hertz to Mels
:param hz: a value in Hz. This can also be a numpy array, conversion proceeds element-wise.
:returns: a value in Mels. If an array was passed in, an identical sized array is returned.
"""
return 2595 * np.log10(1 + hz / 700.0)
def mel2hz(mel):
"""Convert a value in Mels to Hertz
:param mel: a value in Mels. This can also be a numpy array, conversion proceeds element-wise.
:returns: a value in Hertz. If an array was passed in, an identical sized array is returned.
"""
return 700 * (10 ** (mel / 2595.0) - 1)
def get_filterbanks(nfilt=20, nfft=512, samplerate=16000, lowfreq=0, highfreq=None):
"""Compute a Mel-filterbank. The filters are stored in the rows, the columns correspond
to fft bins. The filters are returned as an array of size nfilt * (nfft/2 + 1)
:param nfilt: the number of filters in the filterbank, default 20.
:param nfft: the FFT size. Default is 512.
:param samplerate: the samplerate of the signal we are working with. Affects mel spacing.
:param lowfreq: lowest band edge of mel filters, default 0 Hz
:param highfreq: highest band edge of mel filters, default samplerate/2
:returns: A numpy array of size nfilt * (nfft/2 + 1) containing filterbank. Each row holds 1 filter.
"""
highfreq = highfreq or samplerate / 2
assert highfreq <= samplerate / 2, "highfreq is greater than samplerate/2"
# compute points evenly spaced in mels
lowmel = hz2mel(lowfreq)
highmel = hz2mel(highfreq)
melpoints = np.linspace(lowmel, highmel, nfilt + 2)
# our points are in Hz, but we use fft bins, so we have to convert
# from Hz to fft bin number
bin = np.floor((nfft + 1) * mel2hz(melpoints) / samplerate)
fbank = np.zeros([nfilt, nfft // 2])
for j in range(0, nfilt):
for i in range(int(bin[j]), int(bin[j + 1])):
fbank[j, i] = (i - bin[j]) / (bin[j + 1] - bin[j])
for i in range(int(bin[j + 1]), int(bin[j + 2])):
fbank[j, i] = (bin[j + 2] - i) / (bin[j + 2] - bin[j + 1])
return fbank
def create_mel_filter(
fft_size, n_freq_components=64, start_freq=300, end_freq=8000, samplerate=44100
):
"""
Creates a filter to convolve with the spectrogram to get out mels
"""
mel_inversion_filter = get_filterbanks(
nfilt=n_freq_components,
nfft=fft_size,
samplerate=samplerate,
lowfreq=start_freq,
highfreq=end_freq,
)
# Normalize filter
mel_filter = mel_inversion_filter.T / mel_inversion_filter.sum(axis=1)
return mel_filter, mel_inversion_filter
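# --- Assumed input signal (not shown in this excerpt) ---
# The cells below operate on a 1-D array called `data`, but this notebook never
# shows where it comes from. A typical source would be a mono WAV file; the
# path below is purely illustrative, so the load is guarded.
import os
from scipy.io import wavfile

wav_path = "recording.wav"  # hypothetical filename
if os.path.exists(wav_path):
    rate, data = wavfile.read(wav_path)
    data = data.astype("float64")  # the spectrogram helpers expect a float signal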
### Parameters ###
fft_size = 2048 # window size for the FFT
step_size = fft_size // 16 # distance to slide along the window (in time)
spec_thresh = 4 # threshold for spectrograms (lower filters out more noise)
lowcut = 500 # Hz # Low cut for our butter bandpass filter
highcut = 15000 # Hz # High cut for our butter bandpass filter
# For mels
n_mel_freq_components = 64 # number of mel frequency channels
shorten_factor = 10 # how much should we compress the x-axis (time)
start_freq = 300 # Hz # What frequency to start sampling our melS from
end_freq = 8000 # Hz # What frequency to stop sampling our melS from
wav_spectrogram = pretty_spectrogram(
data.astype("float64"),
fft_size=fft_size,
step_size=step_size,
log=True,
thresh=spec_thresh,
)
###Output
_____no_output_____ |
portfolio/fundamentals/003-pets-dataloader.ipynb | ###Markdown
`003-pets-dataloader`Task: create data loaders for the Pets dataset using the mid-level `DataBlocks` API Setup
###Code
# setup fastai if needed
try: import fastbook
except ImportError: import subprocess; subprocess.run(['pip','install','-Uq','fastbook'])
# Import fastai code.
from fastai.vision.all import *
# Set a seed for reproducibility.
set_seed(12345, reproducible=True)
###Output
_____no_output_____
###Markdown
Load the data.
###Code
path = untar_data(URLs.PETS) / "images"
###Output
_____no_output_____
###Markdown
Sort the image filenames so the ordering is consistent across platforms and runs.
###Code
image_files = sorted(get_image_files(path))
###Output
_____no_output_____
###Markdown
Task Create the cat-vs-dog classifier of notebook `000`, but use the mid-level `DataBlocks` API instead of the high-level `ImageDataLoaders`. To do so:1. Create a `splitter` object that will randomly split the images into a training set and a 20% validation set. Use a random seed of 42 to make results reproducible.2. Show the first few indices from the train and valid sets that result from applying `splitter` to the `image_files` declared above.3. Write a `get_y` function that takes an image file `Path` object and returns `cat` or `dog` accordingly.4. Test your `get_y` function by applying it to the first image in the training set. Load the image file (using `PILImage.create`) and check that the label is correct.5. Create a `DataBlock` using the standard `ImageBlock` and `CategoryBlock`, your `get_y` function, and the `splitter`. Have it transform each item by resizing it to 224 pixels square.6. Create a `DataLoaders` by applying your `DataBlock` to the `image_files`.7. Test your `DataLoaders` by showing a batch of images from the validation set. Solution
###Code
# Your code here
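# One possible solution sketch (not the official answer). Assumptions: fastai's
# DataBlock API as imported above, and the Oxford-IIIT Pets convention that
# cat-breed filenames start with an uppercase letter.

# 1. Random 80/20 split with a fixed seed for reproducibility.
splitter = RandomSplitter(valid_pct=0.2, seed=42)

# 2. First few indices of the train and valid sets produced by the splitter.
train_idx, valid_idx = splitter(image_files)
print(train_idx[:5], valid_idx[:5])

# 3. Label function: uppercase first letter -> cat, otherwise dog.
def get_y(path):
    return "cat" if path.name[0].isupper() else "dog"

# 4. Sanity-check the label on the first training image.
first_train_file = image_files[train_idx[0]]
img = PILImage.create(first_train_file)
print(first_train_file.name, "->", get_y(first_train_file))

# 5. DataBlock: images in, categories out, each item resized to 224x224.
pets_block = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_y=get_y,
    splitter=splitter,
    item_tfms=Resize(224),
)

# 6. Build the DataLoaders from the list of image files.
dls = pets_block.dataloaders(image_files)

# 7. Show a batch of images from the validation set.
dls.valid.show_batch(max_n=8)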
###Output
_____no_output_____ |
tutorials/oct_cb_tutorial_10_ofd_viewer_beta.ipynb | ###Markdown
Tutorial 10: Editing the editsettings.ini using the .ofd viewer
###Code
#Import required system libraries for file management
import sys,importlib,os
# Provide path to oct-cbort library
module_path=os.path.abspath('C:\\Users\SPARC_PSOCT_MGH\Documents\GitHub\oct-cbort')
if module_path not in sys.path:
sys.path.append(module_path)
# Import oct-cbort library
from oct.view.ofd import *
###Output
_____no_output_____
###Markdown
Within the view module lie two types of viewers:1) OFDView - For unprocessed, raw .ofd data, used to optimize the editsettings.ini before running the processing over the entire volume. 2) MGHView - A multi-dimensional slicer that uses binary memory mapping to access any view of the outputted data. Here we will look at the first, the *.ofd viewer.Both viewers are built on a Napari and MagicGui backbone. https://napari.org/ OFDView
###Code
# Put any directory here
directory = 'G:\\Damon\Damon_Temp_test\[p.D8_9_4_19][s.baseline][09-04-2019_09-07-30]'
# put whatever processing states you want to visualize on the fly
state = 'struct+angio+ps+hsv'
viewer = OFDView(directory, state)
viewer.run()
###Output
Viewing frame: 6
Viewing frame: 430
|
2018-08-10_AV_Innoplexus/04. Model creation.ipynb | ###Markdown
Model creation & selectionNow we can look at what kind of model predicts our classes the best.First I'm going to use TPOT to search the possible options and give me a good starting option.
###Code
import pandas as pd
data_dir = "../data/2018-08-10_AV_Innoplexus/"
train_df = pd.read_csv(data_dir+"train_df_tfidf.csv")
#Encode Tag
tags = train_df['Tag'].unique().tolist()
tags.sort()
tag_dict = {key: value for (key, value) in zip(tags,range(len(tags)))}
tag_dict
train_df['Tag_encoded'] = train_df['Tag'].map(tag_dict)
train_df_encoded = train_df.drop('Tag',axis=1)
from sklearn.model_selection import train_test_split
xcols = [x for x in train_df_encoded if x != 'Tag_encoded']
X_train, X_test, y_train, y_test = train_test_split(
train_df_encoded[xcols], train_df_encoded['Tag_encoded'],
test_size=0.33, random_state=27
)
X_train.iloc[:,:100].head()
###Output
_____no_output_____
###Markdown
To determine which scorer I should use for TPOT, I should look and see if my training data has an imbalance of classes.
###Code
train_df_encoded.groupby('Tag_encoded')[train_df_encoded.columns[0]].nunique()
###Output
_____no_output_____
###Markdown
Because there appears to be some imbalance between the classes, I will go with f1_micro.
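###Markdown
As a quick reminder of what that scorer computes: micro-averaged F1 pools the true positives, false positives and false negatives over all classes before computing precision and recall, so every sample counts equally rather than every class:$$P_{micro} = \frac{\sum_c TP_c}{\sum_c (TP_c + FP_c)}, \quad R_{micro} = \frac{\sum_c TP_c}{\sum_c (TP_c + FN_c)}, \quad F1_{micro} = \frac{2\,P_{micro}\,R_{micro}}{P_{micro} + R_{micro}}$$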
###Code
from tpot import TPOTClassifier
import os
print(os.path.exists("./TPOT_gens"))
mingw_path = 'C:\\Program Files\\mingw-w64\\x86_64-8.1.0-posix-seh-rt_v6-rev0\\mingw64\\bin'
os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
import xgboost
tpot_class = TPOTClassifier(scoring='f1_micro', periodic_checkpoint_folder="./TPOT_gens",
n_jobs=-1, verbosity=3)
###Output
True
30 operators have been imported by TPOT.
###Markdown
TPOT was taking a really, really long time with all 500 features, so I went with the top 100.
###Code
tpot_class.fit(X_train.iloc[:,:100], y_train)
print(tpot_class.score(X_test, y_test))
tpot_class.export('./TPOT_gens/final_tpot_innoplexus_pipeline.py')
###Output
_____no_output_____ |
Step 4/.ipynb_checkpoints/Activity_7_Optimizing_a_deep_learning_model-checkpoint.ipynb | ###Markdown
Activity 7: Optimizing a deep learning modelIn this activity we optimize our deep learning model. We aim to achieve better performance than our model `bitcoin_lstm_v0`, whose predictions are off by about 6.8% from the real Bitcoin prices. We explore the following topics in this notebook:* Experimenting with different layers and the number of nodes* A grid-search strategy over epochs and activation functions Load Data
###Code
%autosave 5
# Import necessary libraries
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('seaborn-white')
from datetime import datetime, timedelta
from keras.models import load_model, Sequential
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Activation, Dropout, ActivityRegularization
from keras.callbacks import TensorBoard
from scripts.utilities_activity7 import (
create_groups, split_lstm_input,
train_model, plot_two_series, rmse,
mape, denormalize)
np.random.seed(0)
# Load datasets
train = pd.read_csv('train_dataset.csv')
test = pd.read_csv('test_dataset.csv')
# Convert `date` column to datetime type
test['date'] = test['date'].apply(
lambda x: datetime.strptime(x, '%Y-%m-%d'))
# Group data into groups containing seven observations
train_data = create_groups(
train['close_point_relative_normalization'][2:].values)
test_data = create_groups(
test['close_point_relative_normalization'][:-3].values)
# Reshape the data in the format expected by the LSTM layer
X_train, Y_train = split_lstm_input(train_data)
###Output
_____no_output_____
###Markdown
Reference Model
###Code
# TASK:
# Load data for `v0` of our model.
# Call this `model_v0`.
model_v0 = load_model('bitcoin_lstm_v0.h5')
%%time
# TASK:
# Train the reference model `model_v0`.
#
model_history = train_model(model=model_v0,
X=X_train, Y=Y_train,
epochs=100,
version=1, run_number=0)
###Output
Epoch 1/100
1/1 [==============================] - 0s 3ms/step - loss: 0.0027
Epoch 2/100
WARNING:tensorflow:From /home/siddhant/.local/lib/python3.8/site-packages/tensorflow/python/ops/summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
1/1 [==============================] - 0s 3ms/step - loss: 0.0023
Epoch 3/100
1/1 [==============================] - 0s 2ms/step - loss: 0.0020
Epoch 4/100
1/1 [==============================] - 0s 2ms/step - loss: 0.0018
Epoch 5/100
1/1 [==============================] - 0s 2ms/step - loss: 0.0016
Epoch 6/100
1/1 [==============================] - 0s 2ms/step - loss: 0.0015
Epoch 7/100
1/1 [==============================] - 0s 3ms/step - loss: 0.0013
Epoch 8/100
1/1 [==============================] - 0s 3ms/step - loss: 0.0012
Epoch 9/100
1/1 [==============================] - 0s 3ms/step - loss: 0.0011
Epoch 10/100
1/1 [==============================] - 0s 2ms/step - loss: 0.0010
Epoch 11/100
1/1 [==============================] - 0s 3ms/step - loss: 9.3836e-04
Epoch 12/100
1/1 [==============================] - 0s 3ms/step - loss: 8.5599e-04
Epoch 13/100
1/1 [==============================] - 0s 3ms/step - loss: 7.7978e-04
Epoch 14/100
1/1 [==============================] - 0s 3ms/step - loss: 7.0919e-04
Epoch 15/100
1/1 [==============================] - 0s 3ms/step - loss: 6.4378e-04
Epoch 16/100
1/1 [==============================] - 0s 2ms/step - loss: 5.8318e-04
Epoch 17/100
1/1 [==============================] - 0s 3ms/step - loss: 5.2708e-04
Epoch 18/100
1/1 [==============================] - 0s 2ms/step - loss: 4.7520e-04
Epoch 19/100
1/1 [==============================] - 0s 3ms/step - loss: 4.2730e-04
Epoch 20/100
1/1 [==============================] - 0s 3ms/step - loss: 3.8314e-04
Epoch 21/100
1/1 [==============================] - 0s 2ms/step - loss: 3.4252e-04
Epoch 22/100
1/1 [==============================] - 0s 2ms/step - loss: 3.0522e-04
Epoch 23/100
1/1 [==============================] - 0s 4ms/step - loss: 2.7106e-04
Epoch 24/100
1/1 [==============================] - 0s 3ms/step - loss: 2.3985e-04
Epoch 25/100
1/1 [==============================] - 0s 2ms/step - loss: 2.1140e-04
Epoch 26/100
1/1 [==============================] - 0s 4ms/step - loss: 1.8554e-04
Epoch 27/100
1/1 [==============================] - 0s 3ms/step - loss: 1.6211e-04
Epoch 28/100
1/1 [==============================] - 0s 3ms/step - loss: 1.4095e-04
Epoch 29/100
1/1 [==============================] - 0s 2ms/step - loss: 1.2189e-04
Epoch 30/100
1/1 [==============================] - 0s 3ms/step - loss: 1.0480e-04
Epoch 31/100
1/1 [==============================] - 0s 2ms/step - loss: 8.9544e-05
Epoch 32/100
1/1 [==============================] - 0s 3ms/step - loss: 7.5981e-05
Epoch 33/100
1/1 [==============================] - 0s 4ms/step - loss: 6.3991e-05
Epoch 34/100
1/1 [==============================] - 0s 2ms/step - loss: 5.3456e-05
Epoch 35/100
1/1 [==============================] - 0s 2ms/step - loss: 4.4263e-05
Epoch 36/100
1/1 [==============================] - 0s 2ms/step - loss: 3.6301e-05
Epoch 37/100
1/1 [==============================] - 0s 2ms/step - loss: 2.9463e-05
Epoch 38/100
1/1 [==============================] - 0s 3ms/step - loss: 2.3648e-05
Epoch 39/100
1/1 [==============================] - 0s 2ms/step - loss: 1.8768e-05
Epoch 40/100
1/1 [==============================] - 0s 2ms/step - loss: 1.4849e-05
Epoch 41/100
1/1 [==============================] - 0s 4ms/step - loss: 1.2779e-05
Epoch 42/100
1/1 [==============================] - 0s 3ms/step - loss: 1.3372e-05
Epoch 43/100
1/1 [==============================] - 0s 3ms/step - loss: 1.7176e-05
Epoch 44/100
1/1 [==============================] - 0s 3ms/step - loss: 1.8914e-05
Epoch 45/100
1/1 [==============================] - 0s 2ms/step - loss: 1.3405e-05
Epoch 46/100
1/1 [==============================] - 0s 2ms/step - loss: 8.2098e-06
Epoch 47/100
1/1 [==============================] - 0s 3ms/step - loss: 4.9156e-06
Epoch 48/100
1/1 [==============================] - 0s 3ms/step - loss: 3.2198e-06
Epoch 49/100
1/1 [==============================] - 0s 3ms/step - loss: 2.2469e-06
Epoch 50/100
1/1 [==============================] - 0s 3ms/step - loss: 1.7154e-06
Epoch 51/100
1/1 [==============================] - 0s 3ms/step - loss: 1.4403e-06
Epoch 52/100
1/1 [==============================] - 0s 4ms/step - loss: 1.4034e-06
Epoch 53/100
1/1 [==============================] - 0s 3ms/step - loss: 1.6093e-06
Epoch 54/100
1/1 [==============================] - 0s 2ms/step - loss: 2.2003e-06
Epoch 55/100
1/1 [==============================] - 0s 4ms/step - loss: 3.3199e-06
Epoch 56/100
1/1 [==============================] - 0s 3ms/step - loss: 5.1905e-06
Epoch 57/100
1/1 [==============================] - 0s 2ms/step - loss: 7.2493e-06
Epoch 58/100
1/1 [==============================] - 0s 3ms/step - loss: 8.6028e-06
Epoch 59/100
1/1 [==============================] - 0s 4ms/step - loss: 7.9663e-06
Epoch 60/100
1/1 [==============================] - 0s 3ms/step - loss: 6.5332e-06
Epoch 61/100
1/1 [==============================] - 0s 3ms/step - loss: 4.8231e-06
Epoch 62/100
1/1 [==============================] - 0s 3ms/step - loss: 3.7093e-06
Epoch 63/100
1/1 [==============================] - 0s 3ms/step - loss: 2.9454e-06
Epoch 64/100
1/1 [==============================] - 0s 4ms/step - loss: 2.6091e-06
Epoch 65/100
1/1 [==============================] - 0s 4ms/step - loss: 2.4934e-06
Epoch 66/100
1/1 [==============================] - 0s 3ms/step - loss: 2.6637e-06
Epoch 67/100
1/1 [==============================] - 0s 3ms/step - loss: 3.0283e-06
Epoch 68/100
1/1 [==============================] - 0s 3ms/step - loss: 3.7054e-06
Epoch 69/100
1/1 [==============================] - 0s 3ms/step - loss: 4.5496e-06
Epoch 70/100
1/1 [==============================] - 0s 3ms/step - loss: 5.5895e-06
Epoch 71/100
1/1 [==============================] - 0s 2ms/step - loss: 6.3181e-06
Epoch 72/100
1/1 [==============================] - 0s 8ms/step - loss: 6.7121e-06
Epoch 73/100
1/1 [==============================] - 0s 3ms/step - loss: 6.3456e-06
Epoch 74/100
1/1 [==============================] - 0s 3ms/step - loss: 5.7741e-06
Epoch 75/100
1/1 [==============================] - 0s 5ms/step - loss: 4.9495e-06
Epoch 76/100
1/1 [==============================] - 0s 3ms/step - loss: 4.3662e-06
Epoch 77/100
1/1 [==============================] - 0s 2ms/step - loss: 3.8714e-06
Epoch 78/100
1/1 [==============================] - 0s 3ms/step - loss: 3.6715e-06
Epoch 79/100
1/1 [==============================] - 0s 3ms/step - loss: 3.5924e-06
Epoch 80/100
1/1 [==============================] - 0s 3ms/step - loss: 3.7647e-06
Epoch 81/100
1/1 [==============================] - 0s 3ms/step - loss: 4.0260e-06
Epoch 82/100
1/1 [==============================] - 0s 3ms/step - loss: 4.4909e-06
Epoch 83/100
1/1 [==============================] - 0s 3ms/step - loss: 4.9258e-06
Epoch 84/100
1/1 [==============================] - 0s 3ms/step - loss: 5.4173e-06
Epoch 85/100
1/1 [==============================] - 0s 3ms/step - loss: 5.6309e-06
Epoch 86/100
1/1 [==============================] - 0s 3ms/step - loss: 5.7397e-06
Epoch 87/100
1/1 [==============================] - 0s 3ms/step - loss: 5.4911e-06
Epoch 88/100
1/1 [==============================] - 0s 2ms/step - loss: 5.2309e-06
Epoch 89/100
1/1 [==============================] - 0s 3ms/step - loss: 4.8097e-06
Epoch 90/100
1/1 [==============================] - 0s 2ms/step - loss: 4.5429e-06
Epoch 91/100
1/1 [==============================] - 0s 3ms/step - loss: 4.2706e-06
Epoch 92/100
1/1 [==============================] - 0s 3ms/step - loss: 4.2020e-06
Epoch 93/100
1/1 [==============================] - 0s 3ms/step - loss: 4.1644e-06
Epoch 94/100
1/1 [==============================] - 0s 2ms/step - loss: 4.3140e-06
Epoch 95/100
1/1 [==============================] - 0s 2ms/step - loss: 4.4615e-06
Epoch 96/100
###Markdown
Adding Layers and Nodes
###Code
# Initialize variables
period_length = 7
number_of_periods = 76
batch_size = 1
# Model 1: two LSTM layers
model_v1 = Sequential()
model_v1.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=True, stateful=False)) # note return_sequences is now true
# TASK:
# Add new LSTM layer to this network here.
#
model_v1.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=False, stateful=False))
model_v1.add(Dense(units=period_length))
model_v1.add(Activation("linear"))
model_v1.compile(loss="mse", optimizer="rmsprop")
%%time
train_model(model=model_v1, X=X_train, Y=Y_train, epochs=200, version=1, run_number=1)
###Output
Epoch 1/200
1/1 [==============================] - 0s 3ms/step - loss: 0.0033
Epoch 2/200
1/1 [==============================] - 0s 3ms/step - loss: 0.0023
Epoch 3/200
1/1 [==============================] - 0s 3ms/step - loss: 0.0017
Epoch 4/200
1/1 [==============================] - 0s 3ms/step - loss: 0.0014
Epoch 5/200
1/1 [==============================] - 0s 4ms/step - loss: 0.0011
Epoch 6/200
1/1 [==============================] - 0s 3ms/step - loss: 8.9148e-04
Epoch 7/200
1/1 [==============================] - 0s 3ms/step - loss: 7.1774e-04
Epoch 8/200
1/1 [==============================] - 0s 5ms/step - loss: 5.7647e-04
Epoch 9/200
1/1 [==============================] - 0s 3ms/step - loss: 4.6100e-04
Epoch 10/200
1/1 [==============================] - 0s 3ms/step - loss: 3.6666e-04
Epoch 11/200
1/1 [==============================] - 0s 3ms/step - loss: 2.8990e-04
Epoch 12/200
1/1 [==============================] - 0s 6ms/step - loss: 2.2787e-04
Epoch 13/200
1/1 [==============================] - 0s 4ms/step - loss: 1.7814e-04
Epoch 14/200
1/1 [==============================] - 0s 5ms/step - loss: 1.3864e-04
Epoch 15/200
1/1 [==============================] - 0s 4ms/step - loss: 1.0753e-04
Epoch 16/200
1/1 [==============================] - 0s 5ms/step - loss: 8.3225e-05
Epoch 17/200
1/1 [==============================] - 0s 4ms/step - loss: 6.4348e-05
Epoch 18/200
1/1 [==============================] - 0s 3ms/step - loss: 4.9750e-05
Epoch 19/200
1/1 [==============================] - 0s 3ms/step - loss: 3.8483e-05
Epoch 20/200
1/1 [==============================] - 0s 3ms/step - loss: 2.9783e-05
Epoch 21/200
1/1 [==============================] - 0s 3ms/step - loss: 2.3049e-05
Epoch 22/200
1/1 [==============================] - 0s 4ms/step - loss: 1.7818e-05
Epoch 23/200
1/1 [==============================] - 0s 3ms/step - loss: 1.3737e-05
Epoch 24/200
1/1 [==============================] - 0s 5ms/step - loss: 1.0543e-05
Epoch 25/200
1/1 [==============================] - 0s 6ms/step - loss: 8.0386e-06
Epoch 26/200
1/1 [==============================] - 0s 3ms/step - loss: 6.0767e-06
Epoch 27/200
1/1 [==============================] - 0s 6ms/step - loss: 4.5451e-06
Epoch 28/200
1/1 [==============================] - 0s 4ms/step - loss: 3.3576e-06
Epoch 29/200
1/1 [==============================] - 0s 6ms/step - loss: 2.4457e-06
Epoch 30/200
1/1 [==============================] - 0s 4ms/step - loss: 1.7541e-06
Epoch 31/200
1/1 [==============================] - 0s 3ms/step - loss: 1.2373e-06
Epoch 32/200
1/1 [==============================] - 0s 3ms/step - loss: 8.5813e-07
Epoch 33/200
1/1 [==============================] - 0s 3ms/step - loss: 5.9257e-07
Epoch 34/200
1/1 [==============================] - 0s 4ms/step - loss: 5.1149e-07
Epoch 35/200
1/1 [==============================] - 0s 3ms/step - loss: 2.0117e-06
Epoch 36/200
1/1 [==============================] - 0s 6ms/step - loss: 1.9858e-05
Epoch 37/200
1/1 [==============================] - 0s 4ms/step - loss: 6.8232e-05
Epoch 38/200
1/1 [==============================] - 0s 3ms/step - loss: 5.4285e-05
Epoch 39/200
1/1 [==============================] - 0s 3ms/step - loss: 1.9748e-05
Epoch 40/200
1/1 [==============================] - 0s 5ms/step - loss: 7.1180e-06
Epoch 41/200
1/1 [==============================] - 0s 3ms/step - loss: 2.8520e-06
Epoch 42/200
1/1 [==============================] - 0s 4ms/step - loss: 1.4474e-06
Epoch 43/200
1/1 [==============================] - 0s 4ms/step - loss: 8.9962e-07
Epoch 44/200
1/1 [==============================] - 0s 3ms/step - loss: 6.9883e-07
Epoch 45/200
1/1 [==============================] - 0s 4ms/step - loss: 6.6118e-07
Epoch 46/200
1/1 [==============================] - 0s 4ms/step - loss: 7.6576e-07
Epoch 47/200
1/1 [==============================] - 0s 7ms/step - loss: 1.0600e-06
Epoch 48/200
1/1 [==============================] - 0s 4ms/step - loss: 1.7482e-06
Epoch 49/200
1/1 [==============================] - 0s 3ms/step - loss: 3.3118e-06
Epoch 50/200
1/1 [==============================] - 0s 3ms/step - loss: 7.0027e-06
Epoch 51/200
1/1 [==============================] - 0s 3ms/step - loss: 1.4803e-05
Epoch 52/200
1/1 [==============================] - 0s 3ms/step - loss: 2.7069e-05
Epoch 53/200
1/1 [==============================] - 0s 3ms/step - loss: 3.3437e-05
Epoch 54/200
1/1 [==============================] - 0s 3ms/step - loss: 2.8281e-05
Epoch 55/200
1/1 [==============================] - 0s 3ms/step - loss: 1.7590e-05
Epoch 56/200
1/1 [==============================] - 0s 4ms/step - loss: 1.0636e-05
Epoch 57/200
1/1 [==============================] - 0s 3ms/step - loss: 6.5759e-06
Epoch 58/200
1/1 [==============================] - 0s 3ms/step - loss: 4.6864e-06
Epoch 59/200
1/1 [==============================] - 0s 3ms/step - loss: 3.7778e-06
Epoch 60/200
1/1 [==============================] - 0s 3ms/step - loss: 3.5863e-06
Epoch 61/200
1/1 [==============================] - 0s 3ms/step - loss: 3.8725e-06
Epoch 62/200
1/1 [==============================] - 0s 5ms/step - loss: 4.8104e-06
Epoch 63/200
1/1 [==============================] - 0s 4ms/step - loss: 6.5312e-06
Epoch 64/200
1/1 [==============================] - 0s 4ms/step - loss: 9.5472e-06
Epoch 65/200
1/1 [==============================] - 0s 7ms/step - loss: 1.3697e-05
Epoch 66/200
1/1 [==============================] - 0s 3ms/step - loss: 1.8414e-05
Epoch 67/200
1/1 [==============================] - 0s 6ms/step - loss: 2.0620e-05
Epoch 68/200
1/1 [==============================] - 0s 3ms/step - loss: 1.9805e-05
Epoch 69/200
1/1 [==============================] - 0s 4ms/step - loss: 1.5961e-05
Epoch 70/200
1/1 [==============================] - 0s 5ms/step - loss: 1.2413e-05
Epoch 71/200
1/1 [==============================] - 0s 4ms/step - loss: 9.3895e-06
Epoch 72/200
1/1 [==============================] - 0s 3ms/step - loss: 7.6691e-06
Epoch 73/200
1/1 [==============================] - 0s 6ms/step - loss: 6.6295e-06
Epoch 74/200
1/1 [==============================] - 0s 4ms/step - loss: 6.3934e-06
Epoch 75/200
1/1 [==============================] - 0s 4ms/step - loss: 6.6185e-06
Epoch 76/200
1/1 [==============================] - 0s 3ms/step - loss: 7.5228e-06
Epoch 77/200
1/1 [==============================] - 0s 4ms/step - loss: 8.8791e-06
Epoch 78/200
1/1 [==============================] - 0s 5ms/step - loss: 1.0911e-05
Epoch 79/200
1/1 [==============================] - 0s 3ms/step - loss: 1.2935e-05
Epoch 80/200
1/1 [==============================] - 0s 4ms/step - loss: 1.4811e-05
Epoch 81/200
1/1 [==============================] - 0s 3ms/step - loss: 1.5247e-05
Epoch 82/200
1/1 [==============================] - 0s 5ms/step - loss: 1.4772e-05
Epoch 83/200
1/1 [==============================] - 0s 3ms/step - loss: 1.3040e-05
Epoch 84/200
1/1 [==============================] - 0s 4ms/step - loss: 1.1434e-05
Epoch 85/200
1/1 [==============================] - 0s 4ms/step - loss: 9.7897e-06
Epoch 86/200
1/1 [==============================] - 0s 4ms/step - loss: 8.8217e-06
Epoch 87/200
1/1 [==============================] - 0s 5ms/step - loss: 8.1394e-06
Epoch 88/200
1/1 [==============================] - 0s 9ms/step - loss: 8.0703e-06
Epoch 89/200
1/1 [==============================] - 0s 5ms/step - loss: 8.2564e-06
Epoch 90/200
1/1 [==============================] - 0s 7ms/step - loss: 8.9634e-06
Epoch 91/200
1/1 [==============================] - 0s 5ms/step - loss: 9.7916e-06
Epoch 92/200
1/1 [==============================] - 0s 4ms/step - loss: 1.0954e-05
Epoch 93/200
1/1 [==============================] - 0s 4ms/step - loss: 1.1806e-05
Epoch 94/200
1/1 [==============================] - 0s 6ms/step - loss: 1.2556e-05
Epoch 95/200
1/1 [==============================] - 0s 5ms/step - loss: 1.2493e-05
Epoch 96/200
1/1 [==============================] - 0s 4ms/step - loss: 1.2203e-05
Epoch 97/200
1/1 [==============================] - 0s 12ms/step - loss: 1.1288e-05
Epoch 98/200
1/1 [==============================] - 0s 4ms/step - loss: 1.0545e-05
Epoch 99/200
1/1 [==============================] - 0s 5ms/step - loss: 9.6606e-06
Epoch 100/200
###Markdown
Epochs
###Code
# Model 2: two LSTM layers, trained for 300 epochs
model_v2 = Sequential()
model_v2.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=True, stateful=False))
model_v2.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=False, stateful=False))
model_v2.add(Dense(units=period_length))
model_v2.add(Activation("linear"))
model_v2.compile(loss="mse", optimizer="rmsprop")
%%time
# TASK:
# Change the number of epochs below
# to 300 and evaluate the results on TensorBoard.
#
train_model(model=model_v2, X=X_train, Y=Y_train, epochs=300, version=2, run_number=2)
###Output
Epoch 1/300
1/1 [==============================] - 0s 4ms/step - loss: 0.0025
Epoch 2/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0020
Epoch 3/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0017
Epoch 4/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0015
Epoch 5/300
1/1 [==============================] - 0s 4ms/step - loss: 0.0013
Epoch 6/300
1/1 [==============================] - 0s 5ms/step - loss: 0.0011
Epoch 7/300
1/1 [==============================] - 0s 5ms/step - loss: 0.0010
Epoch 8/300
1/1 [==============================] - 0s 6ms/step - loss: 9.0331e-04
Epoch 9/300
1/1 [==============================] - 0s 5ms/step - loss: 8.0278e-04
Epoch 10/300
1/1 [==============================] - 0s 5ms/step - loss: 7.1211e-04
Epoch 11/300
1/1 [==============================] - 0s 3ms/step - loss: 6.2976e-04
Epoch 12/300
1/1 [==============================] - 0s 4ms/step - loss: 5.5459e-04
Epoch 13/300
1/1 [==============================] - 0s 4ms/step - loss: 4.8574e-04
Epoch 14/300
1/1 [==============================] - 0s 3ms/step - loss: 4.2261e-04
Epoch 15/300
1/1 [==============================] - 0s 5ms/step - loss: 3.6475e-04
Epoch 16/300
1/1 [==============================] - 0s 7ms/step - loss: 3.1189e-04
Epoch 17/300
1/1 [==============================] - 0s 5ms/step - loss: 2.6386e-04
Epoch 18/300
1/1 [==============================] - 0s 6ms/step - loss: 2.2055e-04
Epoch 19/300
1/1 [==============================] - 0s 5ms/step - loss: 1.8186e-04
Epoch 20/300
1/1 [==============================] - 0s 4ms/step - loss: 1.4770e-04
Epoch 21/300
1/1 [==============================] - 0s 6ms/step - loss: 1.1796e-04
Epoch 22/300
1/1 [==============================] - 0s 4ms/step - loss: 9.2456e-05
Epoch 23/300
1/1 [==============================] - 0s 5ms/step - loss: 7.0979e-05
Epoch 24/300
1/1 [==============================] - 0s 4ms/step - loss: 5.3253e-05
Epoch 25/300
1/1 [==============================] - 0s 3ms/step - loss: 3.8950e-05
Epoch 26/300
1/1 [==============================] - 0s 6ms/step - loss: 2.7701e-05
Epoch 27/300
1/1 [==============================] - 0s 5ms/step - loss: 1.9135e-05
Epoch 28/300
1/1 [==============================] - 0s 6ms/step - loss: 1.3463e-05
Epoch 29/300
1/1 [==============================] - 0s 4ms/step - loss: 1.7837e-05
Epoch 30/300
1/1 [==============================] - 0s 3ms/step - loss: 2.8002e-05
Epoch 31/300
1/1 [==============================] - 0s 4ms/step - loss: 1.8321e-05
Epoch 32/300
1/1 [==============================] - 0s 5ms/step - loss: 9.5034e-06
Epoch 33/300
1/1 [==============================] - 0s 3ms/step - loss: 4.3925e-06
Epoch 34/300
1/1 [==============================] - 0s 4ms/step - loss: 2.4838e-06
Epoch 35/300
1/1 [==============================] - 0s 4ms/step - loss: 1.4798e-06
Epoch 36/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0748e-06
Epoch 37/300
1/1 [==============================] - 0s 5ms/step - loss: 8.9914e-07
Epoch 38/300
1/1 [==============================] - 0s 4ms/step - loss: 9.6848e-07
Epoch 39/300
1/1 [==============================] - 0s 4ms/step - loss: 1.2206e-06
Epoch 40/300
1/1 [==============================] - 0s 4ms/step - loss: 1.8907e-06
Epoch 41/300
1/1 [==============================] - 0s 3ms/step - loss: 3.0874e-06
Epoch 42/300
1/1 [==============================] - 0s 3ms/step - loss: 5.5211e-06
Epoch 43/300
1/1 [==============================] - 0s 4ms/step - loss: 8.4423e-06
Epoch 44/300
1/1 [==============================] - 0s 6ms/step - loss: 1.1776e-05
Epoch 45/300
1/1 [==============================] - 0s 6ms/step - loss: 1.1141e-05
Epoch 46/300
1/1 [==============================] - 0s 5ms/step - loss: 1.0005e-05
Epoch 47/300
1/1 [==============================] - 0s 4ms/step - loss: 6.8713e-06
Epoch 48/300
1/1 [==============================] - 0s 3ms/step - loss: 5.4008e-06
Epoch 49/300
1/1 [==============================] - 0s 5ms/step - loss: 4.0060e-06
Epoch 50/300
1/1 [==============================] - 0s 4ms/step - loss: 3.6309e-06
Epoch 51/300
1/1 [==============================] - 0s 4ms/step - loss: 3.5401e-06
Epoch 52/300
1/1 [==============================] - 0s 4ms/step - loss: 4.3858e-06
Epoch 53/300
1/1 [==============================] - 0s 4ms/step - loss: 6.3043e-06
Epoch 54/300
1/1 [==============================] - 0s 5ms/step - loss: 9.3864e-06
Epoch 55/300
1/1 [==============================] - 0s 4ms/step - loss: 1.1707e-05
Epoch 56/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0533e-05
Epoch 57/300
1/1 [==============================] - 0s 4ms/step - loss: 9.2512e-06
Epoch 58/300
1/1 [==============================] - 0s 3ms/step - loss: 6.5590e-06
Epoch 59/300
1/1 [==============================] - 0s 4ms/step - loss: 5.4459e-06
Epoch 60/300
1/1 [==============================] - 0s 4ms/step - loss: 4.2817e-06
Epoch 61/300
1/1 [==============================] - 0s 4ms/step - loss: 4.0838e-06
Epoch 62/300
1/1 [==============================] - 0s 3ms/step - loss: 3.9497e-06
Epoch 63/300
1/1 [==============================] - 0s 5ms/step - loss: 4.4268e-06
Epoch 64/300
1/1 [==============================] - 0s 3ms/step - loss: 4.9539e-06
Epoch 65/300
1/1 [==============================] - 0s 4ms/step - loss: 5.9270e-06
Epoch 66/300
1/1 [==============================] - 0s 4ms/step - loss: 6.7092e-06
Epoch 67/300
1/1 [==============================] - 0s 4ms/step - loss: 7.4407e-06
Epoch 68/300
1/1 [==============================] - 0s 4ms/step - loss: 7.6655e-06
Epoch 69/300
1/1 [==============================] - 0s 3ms/step - loss: 7.5457e-06
Epoch 70/300
1/1 [==============================] - 0s 4ms/step - loss: 7.7752e-06
Epoch 71/300
1/1 [==============================] - 0s 3ms/step - loss: 8.1178e-06
Epoch 72/300
1/1 [==============================] - 0s 3ms/step - loss: 9.7571e-06
Epoch 73/300
1/1 [==============================] - 0s 3ms/step - loss: 9.7056e-06
Epoch 74/300
1/1 [==============================] - 0s 3ms/step - loss: 9.4776e-06
Epoch 75/300
1/1 [==============================] - 0s 4ms/step - loss: 6.4893e-06
Epoch 76/300
1/1 [==============================] - 0s 5ms/step - loss: 4.8544e-06
Epoch 77/300
1/1 [==============================] - 0s 5ms/step - loss: 3.4713e-06
Epoch 78/300
1/1 [==============================] - 0s 5ms/step - loss: 2.8998e-06
Epoch 79/300
1/1 [==============================] - 0s 5ms/step - loss: 2.6399e-06
Epoch 80/300
1/1 [==============================] - 0s 6ms/step - loss: 2.7367e-06
Epoch 81/300
1/1 [==============================] - 0s 4ms/step - loss: 3.1209e-06
Epoch 82/300
1/1 [==============================] - 0s 5ms/step - loss: 3.8325e-06
Epoch 83/300
1/1 [==============================] - 0s 5ms/step - loss: 4.9348e-06
Epoch 84/300
1/1 [==============================] - 0s 5ms/step - loss: 6.2994e-06
Epoch 85/300
1/1 [==============================] - 0s 4ms/step - loss: 8.0342e-06
Epoch 86/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0670e-05
Epoch 87/300
1/1 [==============================] - 0s 3ms/step - loss: 1.4402e-05
Epoch 88/300
1/1 [==============================] - 0s 5ms/step - loss: 1.5231e-05
Epoch 89/300
1/1 [==============================] - 0s 5ms/step - loss: 1.0117e-05
Epoch 90/300
1/1 [==============================] - 0s 4ms/step - loss: 7.0775e-06
Epoch 91/300
1/1 [==============================] - 0s 3ms/step - loss: 6.1567e-06
Epoch 92/300
1/1 [==============================] - 0s 4ms/step - loss: 6.6251e-06
Epoch 93/300
1/1 [==============================] - 0s 5ms/step - loss: 7.6602e-06
Epoch 94/300
1/1 [==============================] - 0s 5ms/step - loss: 6.0420e-06
Epoch 95/300
1/1 [==============================] - 0s 4ms/step - loss: 4.8120e-06
Epoch 96/300
1/1 [==============================] - 0s 5ms/step - loss: 3.3091e-06
Epoch 97/300
1/1 [==============================] - 0s 4ms/step - loss: 2.8630e-06
Epoch 98/300
1/1 [==============================] - 0s 5ms/step - loss: 2.6785e-06
Epoch 99/300
###Markdown
Activation Functions
###Code
# Model 3: two LSTM layers, trained for 300 epochs,
# tanh activation function
model_v3 = Sequential()
model_v3.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=True, stateful=False))
model_v3.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=False, stateful=False))
model_v3.add(Dense(units=period_length))
# TASK:
# Change the activation function
# from "linear" to "tanh".
#
model_v3.add(Activation("tanh"))
model_v3.compile(loss="mse", optimizer="rmsprop")
%%time
train_model(model=model_v3, X=X_train, Y=Y_train, epochs=300, version=3, run_number=0)
###Output
Epoch 1/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0030
Epoch 2/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0025
Epoch 3/300
1/1 [==============================] - 0s 4ms/step - loss: 0.0022
Epoch 4/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0020
Epoch 5/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0018
Epoch 6/300
1/1 [==============================] - 0s 4ms/step - loss: 0.0016
Epoch 7/300
1/1 [==============================] - 0s 3ms/step - loss: 0.0015
Epoch 8/300
1/1 [==============================] - 0s 5ms/step - loss: 0.0013
Epoch 9/300
1/1 [==============================] - 0s 4ms/step - loss: 0.0012
Epoch 10/300
1/1 [==============================] - 0s 6ms/step - loss: 0.0011
Epoch 11/300
1/1 [==============================] - 0s 6ms/step - loss: 9.7731e-04
Epoch 12/300
1/1 [==============================] - 0s 4ms/step - loss: 8.7109e-04
Epoch 13/300
1/1 [==============================] - 0s 3ms/step - loss: 7.7089e-04
Epoch 14/300
1/1 [==============================] - 0s 3ms/step - loss: 6.7682e-04
Epoch 15/300
1/1 [==============================] - 0s 4ms/step - loss: 5.8904e-04
Epoch 16/300
1/1 [==============================] - 0s 5ms/step - loss: 5.0786e-04
Epoch 17/300
1/1 [==============================] - 0s 3ms/step - loss: 4.3395e-04
Epoch 18/300
1/1 [==============================] - 0s 3ms/step - loss: 3.7100e-04
Epoch 19/300
1/1 [==============================] - 0s 3ms/step - loss: 3.2057e-04
Epoch 20/300
1/1 [==============================] - 0s 3ms/step - loss: 2.7459e-04
Epoch 21/300
1/1 [==============================] - 0s 4ms/step - loss: 2.2347e-04
Epoch 22/300
1/1 [==============================] - 0s 3ms/step - loss: 1.7956e-04
Epoch 23/300
1/1 [==============================] - 0s 4ms/step - loss: 1.4356e-04
Epoch 24/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1433e-04
Epoch 25/300
1/1 [==============================] - 0s 4ms/step - loss: 9.0469e-05
Epoch 26/300
1/1 [==============================] - 0s 3ms/step - loss: 7.1456e-05
Epoch 27/300
1/1 [==============================] - 0s 4ms/step - loss: 5.6697e-05
Epoch 28/300
1/1 [==============================] - 0s 3ms/step - loss: 4.6189e-05
Epoch 29/300
1/1 [==============================] - 0s 6ms/step - loss: 3.9623e-05
Epoch 30/300
1/1 [==============================] - 0s 3ms/step - loss: 3.6275e-05
Epoch 31/300
1/1 [==============================] - 0s 4ms/step - loss: 3.3958e-05
Epoch 32/300
1/1 [==============================] - 0s 7ms/step - loss: 2.9938e-05
Epoch 33/300
1/1 [==============================] - 0s 6ms/step - loss: 2.4751e-05
Epoch 34/300
1/1 [==============================] - 0s 3ms/step - loss: 1.9140e-05
Epoch 35/300
1/1 [==============================] - 0s 10ms/step - loss: 1.4875e-05
Epoch 36/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1693e-05
Epoch 37/300
1/1 [==============================] - 0s 4ms/step - loss: 9.7886e-06
Epoch 38/300
1/1 [==============================] - 0s 4ms/step - loss: 8.6889e-06
Epoch 39/300
1/1 [==============================] - 0s 5ms/step - loss: 8.4581e-06
Epoch 40/300
1/1 [==============================] - 0s 5ms/step - loss: 8.8152e-06
Epoch 41/300
1/1 [==============================] - 0s 4ms/step - loss: 9.8862e-06
Epoch 42/300
1/1 [==============================] - 0s 5ms/step - loss: 1.1280e-05
Epoch 43/300
1/1 [==============================] - 0s 9ms/step - loss: 1.2925e-05
Epoch 44/300
1/1 [==============================] - 0s 6ms/step - loss: 1.3987e-05
Epoch 45/300
1/1 [==============================] - 0s 6ms/step - loss: 1.4410e-05
Epoch 46/300
1/1 [==============================] - 0s 5ms/step - loss: 1.3734e-05
Epoch 47/300
1/1 [==============================] - 0s 4ms/step - loss: 1.2658e-05
Epoch 48/300
1/1 [==============================] - 0s 11ms/step - loss: 1.1283e-05
Epoch 49/300
1/1 [==============================] - 0s 6ms/step - loss: 1.0213e-05
Epoch 50/300
1/1 [==============================] - 0s 7ms/step - loss: 9.3813e-06
Epoch 51/300
1/1 [==============================] - 0s 6ms/step - loss: 9.0167e-06
Epoch 52/300
1/1 [==============================] - 0s 6ms/step - loss: 8.9578e-06
Epoch 53/300
1/1 [==============================] - 0s 4ms/step - loss: 9.3088e-06
Epoch 54/300
1/1 [==============================] - 0s 7ms/step - loss: 9.8889e-06
Epoch 55/300
1/1 [==============================] - 0s 5ms/step - loss: 1.0731e-05
Epoch 56/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1558e-05
Epoch 57/300
1/1 [==============================] - 0s 3ms/step - loss: 1.2315e-05
Epoch 58/300
1/1 [==============================] - 0s 4ms/step - loss: 1.2680e-05
Epoch 59/300
1/1 [==============================] - 0s 3ms/step - loss: 1.2710e-05
Epoch 60/300
1/1 [==============================] - 0s 6ms/step - loss: 1.2306e-05
Epoch 61/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1744e-05
Epoch 62/300
1/1 [==============================] - 0s 4ms/step - loss: 1.1075e-05
Epoch 63/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0543e-05
Epoch 64/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0143e-05
Epoch 65/300
1/1 [==============================] - 0s 5ms/step - loss: 9.9911e-06
Epoch 66/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0020e-05
Epoch 67/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0268e-05
Epoch 68/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0624e-05
Epoch 69/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1075e-05
Epoch 70/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1469e-05
Epoch 71/300
1/1 [==============================] - 0s 5ms/step - loss: 1.1779e-05
Epoch 72/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1881e-05
Epoch 73/300
1/1 [==============================] - 0s 5ms/step - loss: 1.1825e-05
Epoch 74/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1582e-05
Epoch 75/300
1/1 [==============================] - 0s 4ms/step - loss: 1.1282e-05
Epoch 76/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0941e-05
Epoch 77/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0682e-05
Epoch 78/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0491e-05
Epoch 79/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0442e-05
Epoch 80/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0480e-05
Epoch 81/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0636e-05
Epoch 82/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0822e-05
Epoch 83/300
1/1 [==============================] - 0s 5ms/step - loss: 1.1048e-05
Epoch 84/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1210e-05
Epoch 85/300
1/1 [==============================] - 0s 4ms/step - loss: 1.1329e-05
Epoch 86/300
1/1 [==============================] - 0s 3ms/step - loss: 1.1325e-05
Epoch 87/300
1/1 [==============================] - 0s 5ms/step - loss: 1.1261e-05
Epoch 88/300
1/1 [==============================] - 0s 4ms/step - loss: 1.1096e-05
Epoch 89/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0930e-05
Epoch 90/300
1/1 [==============================] - 0s 3ms/step - loss: 1.0734e-05
Epoch 91/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0604e-05
Epoch 92/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0495e-05
Epoch 93/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0479e-05
Epoch 94/300
1/1 [==============================] - 0s 6ms/step - loss: 1.0489e-05
Epoch 95/300
1/1 [==============================] - 0s 6ms/step - loss: 1.0575e-05
Epoch 96/300
1/1 [==============================] - 0s 6ms/step - loss: 1.0650e-05
Epoch 97/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0762e-05
Epoch 98/300
1/1 [==============================] - 0s 4ms/step - loss: 1.0812e-05
Epoch 99/300
###Markdown
Regularization Strategies
###Code
model_v4 = Sequential()
model_v4.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=True, stateful=False))
# TASK:
# Implement a Dropout() here.
#
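# Dropout(0.5) randomly zeroes half of the previous layer's outputs during
# training; this simple regularization strategy helps reduce overfitting.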
model_v4.add(Dropout(0.5))
model_v4.add(LSTM(
units=period_length,
batch_input_shape=(batch_size, number_of_periods, period_length),
input_shape=(number_of_periods, period_length),
return_sequences=False, stateful=False))
# TASK:
# Implement a Dropout() here too.
#
model_v4.add(Dropout(0.5))
model_v4.add(Dense(units=period_length))
model_v4.add(Activation("tanh"))
model_v4.compile(loss="mse", optimizer="rmsprop")
%%time
train_model(model=model_v4, X=X_train, Y=Y_train, epochs=600, version=4, run_number=0)
###Output
Epoch 1/600
1/1 [==============================] - 0s 5ms/step - loss: 0.0057
Epoch 2/600
1/1 [==============================] - 0s 4ms/step - loss: 0.0039
Epoch 3/600
1/1 [==============================] - 0s 3ms/step - loss: 0.0062
Epoch 4/600
1/1 [==============================] - 0s 3ms/step - loss: 0.0019
Epoch 5/600
1/1 [==============================] - 0s 3ms/step - loss: 0.0015
Epoch 6/600
1/1 [==============================] - 0s 4ms/step - loss: 9.9061e-04
Epoch 7/600
1/1 [==============================] - 0s 4ms/step - loss: 0.0010
Epoch 8/600
1/1 [==============================] - 0s 5ms/step - loss: 0.0021
Epoch 9/600
1/1 [==============================] - 0s 6ms/step - loss: 9.7004e-04
Epoch 10/600
1/1 [==============================] - 0s 3ms/step - loss: 0.0015
Epoch 11/600
1/1 [==============================] - 0s 4ms/step - loss: 0.0016
Epoch 12/600
1/1 [==============================] - 0s 5ms/step - loss: 0.0010
Epoch 13/600
1/1 [==============================] - 0s 4ms/step - loss: 0.0014
Epoch 14/600
1/1 [==============================] - 0s 3ms/step - loss: 0.0017
Epoch 15/600
1/1 [==============================] - 0s 4ms/step - loss: 0.0011
Epoch 16/600
1/1 [==============================] - 0s 4ms/step - loss: 9.8333e-04
Epoch 17/600
1/1 [==============================] - 0s 5ms/step - loss: 0.0012
Epoch 18/600
1/1 [==============================] - 0s 4ms/step - loss: 0.0011
Epoch 19/600
1/1 [==============================] - 0s 7ms/step - loss: 8.5947e-04
Epoch 20/600
1/1 [==============================] - 0s 5ms/step - loss: 0.0010
Epoch 21/600
1/1 [==============================] - 0s 7ms/step - loss: 9.7429e-04
Epoch 22/600
1/1 [==============================] - 0s 6ms/step - loss: 6.2077e-04
Epoch 23/600
1/1 [==============================] - 0s 5ms/step - loss: 7.3191e-04
Epoch 24/600
1/1 [==============================] - 0s 4ms/step - loss: 3.5046e-04
Epoch 25/600
1/1 [==============================] - 0s 4ms/step - loss: 8.0915e-04
Epoch 26/600
1/1 [==============================] - 0s 4ms/step - loss: 5.9664e-04
Epoch 27/600
1/1 [==============================] - 0s 4ms/step - loss: 5.8172e-04
Epoch 28/600
1/1 [==============================] - 0s 4ms/step - loss: 3.3433e-04
Epoch 29/600
1/1 [==============================] - 0s 3ms/step - loss: 4.5338e-04
Epoch 30/600
1/1 [==============================] - 0s 4ms/step - loss: 7.3317e-04
Epoch 31/600
1/1 [==============================] - 0s 6ms/step - loss: 9.4022e-04
Epoch 32/600
1/1 [==============================] - 0s 6ms/step - loss: 3.8563e-04
Epoch 33/600
1/1 [==============================] - 0s 6ms/step - loss: 1.1113e-04
Epoch 34/600
1/1 [==============================] - 0s 7ms/step - loss: 8.7462e-04
Epoch 35/600
1/1 [==============================] - 0s 4ms/step - loss: 5.6167e-04
Epoch 36/600
1/1 [==============================] - 0s 4ms/step - loss: 8.5118e-04
Epoch 37/600
1/1 [==============================] - 0s 5ms/step - loss: 2.5844e-04
Epoch 38/600
1/1 [==============================] - 0s 5ms/step - loss: 2.7805e-04
Epoch 39/600
1/1 [==============================] - 0s 4ms/step - loss: 1.6081e-04
Epoch 40/600
1/1 [==============================] - 0s 6ms/step - loss: 3.9850e-04
Epoch 41/600
1/1 [==============================] - 0s 6ms/step - loss: 2.9829e-04
Epoch 42/600
1/1 [==============================] - 0s 3ms/step - loss: 2.1742e-04
Epoch 43/600
1/1 [==============================] - 0s 3ms/step - loss: 3.0707e-04
Epoch 44/600
1/1 [==============================] - 0s 3ms/step - loss: 4.2882e-04
Epoch 45/600
1/1 [==============================] - 0s 3ms/step - loss: 2.4614e-04
Epoch 46/600
1/1 [==============================] - 0s 4ms/step - loss: 8.7706e-05
Epoch 47/600
1/1 [==============================] - 0s 3ms/step - loss: 3.9228e-04
Epoch 48/600
1/1 [==============================] - 0s 3ms/step - loss: 1.1301e-04
Epoch 49/600
1/1 [==============================] - 0s 3ms/step - loss: 2.5156e-04
Epoch 50/600
1/1 [==============================] - 0s 3ms/step - loss: 9.2614e-05
Epoch 51/600
1/1 [==============================] - 0s 4ms/step - loss: 3.3339e-04
Epoch 52/600
1/1 [==============================] - 0s 3ms/step - loss: 7.9552e-04
Epoch 53/600
1/1 [==============================] - 0s 4ms/step - loss: 3.4769e-04
Epoch 54/600
1/1 [==============================] - 0s 3ms/step - loss: 4.7558e-04
Epoch 55/600
1/1 [==============================] - 0s 3ms/step - loss: 5.6939e-05
Epoch 56/600
1/1 [==============================] - 0s 3ms/step - loss: 1.6952e-04
Epoch 57/600
1/1 [==============================] - 0s 3ms/step - loss: 4.0235e-04
Epoch 58/600
1/1 [==============================] - 0s 3ms/step - loss: 7.8758e-05
Epoch 59/600
1/1 [==============================] - 0s 3ms/step - loss: 4.7681e-05
Epoch 60/600
1/1 [==============================] - 0s 3ms/step - loss: 3.2756e-04
Epoch 61/600
1/1 [==============================] - 0s 3ms/step - loss: 2.1976e-04
Epoch 62/600
1/1 [==============================] - 0s 4ms/step - loss: 7.7294e-05
Epoch 63/600
1/1 [==============================] - 0s 5ms/step - loss: 4.4573e-04
Epoch 64/600
1/1 [==============================] - 0s 5ms/step - loss: 1.2028e-04
Epoch 65/600
1/1 [==============================] - 0s 7ms/step - loss: 1.2304e-04
Epoch 66/600
1/1 [==============================] - 0s 3ms/step - loss: 8.0001e-05
Epoch 67/600
1/1 [==============================] - 0s 5ms/step - loss: 2.1688e-04
Epoch 68/600
1/1 [==============================] - 0s 5ms/step - loss: 2.0145e-04
Epoch 69/600
1/1 [==============================] - 0s 4ms/step - loss: 1.1169e-04
Epoch 70/600
1/1 [==============================] - 0s 3ms/step - loss: 2.4862e-04
Epoch 71/600
1/1 [==============================] - 0s 4ms/step - loss: 3.4586e-04
Epoch 72/600
1/1 [==============================] - 0s 3ms/step - loss: 3.5447e-05
Epoch 73/600
1/1 [==============================] - 0s 3ms/step - loss: 1.5288e-04
Epoch 74/600
1/1 [==============================] - 0s 4ms/step - loss: 8.7729e-05
Epoch 75/600
1/1 [==============================] - 0s 4ms/step - loss: 2.1373e-04
Epoch 76/600
1/1 [==============================] - 0s 4ms/step - loss: 2.0293e-04
Epoch 77/600
1/1 [==============================] - 0s 3ms/step - loss: 9.6312e-05
Epoch 78/600
1/1 [==============================] - 0s 4ms/step - loss: 6.3516e-04
Epoch 79/600
1/1 [==============================] - 0s 4ms/step - loss: 1.5767e-04
Epoch 80/600
1/1 [==============================] - 0s 4ms/step - loss: 1.1163e-04
Epoch 81/600
1/1 [==============================] - 0s 4ms/step - loss: 1.0213e-04
Epoch 82/600
1/1 [==============================] - 0s 3ms/step - loss: 1.0602e-04
Epoch 83/600
1/1 [==============================] - 0s 3ms/step - loss: 1.3885e-04
Epoch 84/600
1/1 [==============================] - 0s 6ms/step - loss: 6.8016e-05
Epoch 85/600
1/1 [==============================] - 0s 6ms/step - loss: 1.1110e-04
Epoch 86/600
1/1 [==============================] - 0s 4ms/step - loss: 3.0868e-05
Epoch 87/600
1/1 [==============================] - 0s 5ms/step - loss: 2.4108e-04
Epoch 88/600
1/1 [==============================] - 0s 5ms/step - loss: 1.4052e-04
Epoch 89/600
1/1 [==============================] - 0s 3ms/step - loss: 9.9670e-05
Epoch 90/600
1/1 [==============================] - 0s 5ms/step - loss: 1.3773e-04
Epoch 91/600
1/1 [==============================] - 0s 4ms/step - loss: 1.6518e-04
Epoch 92/600
1/1 [==============================] - 0s 4ms/step - loss: 7.7616e-05
Epoch 93/600
1/1 [==============================] - 0s 3ms/step - loss: 1.7407e-04
Epoch 94/600
1/1 [==============================] - 0s 5ms/step - loss: 1.0948e-04
Epoch 95/600
1/1 [==============================] - 0s 5ms/step - loss: 2.9739e-05
Epoch 96/600
1/1 [==============================] - 0s 4ms/step - loss: 2.8472e-05
Epoch 97/600
1/1 [==============================] - 0s 4ms/step - loss: 1.5015e-05
Epoch 98/600
1/1 [==============================] - 0s 4ms/step - loss: 2.6345e-05
Epoch 99/600
1/1 [==============================] - 0s 4ms/step - loss: 1.7019e-04
Epoch 100/600
###Markdown
Evaluate Models
###Code
combined_set = np.concatenate((train_data, test_data), axis=1)
def evaluate_model(model, kind='series'):
"""Compute the MSE for all future weeks in period.
Parameters
----------
model: Keras trained model
kind: str, default 'series'
Kind of evaluation to perform. If 'series',
then the model will perform an evaluation
over the complete series.
Returns
-------
evaluated_weeks: list
List of MSE values for each evaluated
test week.
"""
if kind == 'series':
predicted_weeks = []
for i in range(0, test_data.shape[1]):
input_series = combined_set[0:,i:i+76]
predicted_weeks.append(model.predict(input_series))
predicted_days = []
for week in predicted_weeks:
predicted_days += list(week[0])
return predicted_days
else:
evaluated_weeks = []
for i in range(0, test_data.shape[1]):
input_series = combined_set[0:,i:i+77]
X_test = input_series[0:,:-1].reshape(1, input_series.shape[1] - 1, 7)
Y_test = input_series[0:,-1:][0]
result = model.evaluate(x=X_test, y=Y_test, verbose=0)
evaluated_weeks.append(result)
return evaluated_weeks
def plot_weekly_mse(series, model_name, color):
"""Plot weekly MSE."""
ax = pd.Series(series).plot(drawstyle="steps-post",
figsize=(14,4),
color=color,
grid=True,
label=model_name,
alpha=0.7,
title='Mean Squared Error (MSE) for Test Data (all models)'.format(
model_name))
ax.set_xticks(range(0, len(series)))
ax.set_xlabel("Predicted Week")
ax.set_ylabel("MSE")
return ax
def plot_weekly_predictions(predicted_days, name, display_plot=True,
variable='close'):
"""Plot weekly predictions and calculate RMSE and MAPE."""
# Create dataframe to store predictions and associated dates
last_day = datetime.strptime(train['date'].max(), '%Y-%m-%d')
list_of_days = []
for days in range(1, len(predicted_days) + 1):
D = (last_day + timedelta(days=days)).strftime('%Y-%m-%d')
list_of_days.append(D)
predicted = pd.DataFrame({
'date': list_of_days,
'close_point_relative_normalization': predicted_days
})
# Convert `date` variable to datetime
predicted['date'] = predicted['date'].apply(
lambda x: datetime.strptime(x, '%Y-%m-%d'))
# Create iso_week column in `predicted` dataframe
predicted['iso_week'] = predicted['date'].apply(
lambda x: x.strftime('%Y-%U'))
# Denormalize predictions
predicted_close = predicted.groupby('iso_week').apply(
lambda x: denormalize(test[:-3], x))
# Plot denormalized predictions and observed values
plot_two_series(test[:-3], predicted_close,
variable=variable,
title=f'{name}: Predictions per Week')
# Calculate RMSE and MAPE
print(f'RMSE: {rmse(test[:-3][variable], predicted_close[variable]):.2f}')
print(f'MAPE: {mape(test[:-3][variable], predicted_close[variable]):.2f}%')
# Evaluate each model trained in this activity in sequence
models = [model_v0, model_v1, model_v2, model_v3, model_v4]
for i, M in enumerate(models):
predicted_days = evaluate_model(M, kind='series')
plot_weekly_predictions(predicted_days, f'model_v{i}')
###Output
_____no_output_____ |
Chapter_16/SVM_regression.ipynb | ###Markdown
Support Vector Machines are perhaps one of the most popular and talked about machine learning algorithms. They were extremely popular around the time they were developed in the 1990s and continue to be the go-to method for a high-performing algorithm with little tuning.

The Maximal-Margin Classifier is a hypothetical classifier that best explains how SVM works in practice. The numeric input variables (x) in your data (the columns) form an n-dimensional space. For example, if you had two input variables, this would form a two-dimensional space. A hyperplane is a line that splits the input variable space. In SVM, a hyperplane is selected to best separate the points in the input variable space by their class, either class 0 or class 1. In two dimensions you can visualize this as a line, and let's assume that all of our input points can be completely separated by this line. For example:

B0 + (B1 × X1) + (B2 × X2) = 0

where the coefficients (B1 and B2) that determine the slope of the line and the intercept (B0) are found by the learning algorithm, and X1 and X2 are the two input variables. You can make classifications using this line. By plugging input values into the line equation, you can calculate whether a new point is above or below the line.

* Above the line, the equation returns a value greater than 0 and the point belongs to the first class (class 0).
* Below the line, the equation returns a value less than 0 and the point belongs to the second class (class 1).
* A value close to the line returns a value close to zero and the point may be difficult to classify.
* If the magnitude of the value is large, the model may have more confidence in the prediction.

The distance between the line and the closest data points is referred to as the margin. The best or optimal line that can separate the two classes is the line that has the largest margin. This is called the Maximal-Margin hyperplane. The margin is calculated as the perpendicular distance from the line to only the closest points. Only these points are relevant in defining the line and in the construction of the classifier. These points are called the support vectors. They support or define the hyperplane. The hyperplane is learned from training data using an optimization procedure that maximizes the margin.

In practice, real data is messy and cannot be separated perfectly with a hyperplane. The constraint of maximizing the margin of the line that separates the classes must be relaxed. This is often called the soft margin classifier. This change allows some points in the training data to violate the separating line. An additional set of coefficients is introduced that gives the margin wiggle room in each dimension. These coefficients are sometimes called slack variables. This increases the complexity of the model, as there are more parameters for the model to fit to the data to provide this flexibility.

A tuning parameter is introduced, called simply C, that defines the magnitude of the wiggle allowed across all dimensions. The C parameter defines the amount of violation of the margin allowed. C = 0 allows no violation and we are back to the inflexible Maximal-Margin Classifier described above. The larger the value of C, the more violations of the hyperplane are permitted. During the learning of the hyperplane from data, all training instances that lie within the distance of the margin will affect the placement of the hyperplane and are referred to as support vectors. 
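As a small, self-contained illustration of this classification rule (the coefficients below are made up for the example rather than learned from data):
###Code
# Hypothetical line B0 + B1*X1 + B2*X2 = 0 with made-up coefficients
B0, B1, B2 = -3.0, 1.0, 1.0

def side_of_line(x1, x2):
    """Signed value of the line equation for the point (x1, x2)."""
    return B0 + B1 * x1 + B2 * x2

# Positive value -> above the line -> class 0; negative value -> below -> class 1;
# values near zero are close to the boundary and harder to classify confidently.
for point in [(4, 4), (1, 1), (1.4, 1.5)]:
    value = side_of_line(*point)
    print(point, "value = %.2f" % value, "-> class", 0 if value > 0 else 1)
###Output
 _____no_output_____
###Markdown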
As noted above, C affects the number of instances that are allowed to fall within the margin, and so it influences the number of support vectors used by the model.

* The smaller the value of C, the more sensitive the algorithm is to the training data (higher variance and lower bias).
* The larger the value of C, the less sensitive the algorithm is to the training data (lower variance and higher bias).

The SVM algorithm is implemented in practice using a kernel. The learning of the hyperplane in linear SVM is done by transforming the problem using some linear algebra. A powerful insight is that the linear SVM can be rephrased using the inner product of any two given observations, rather than the observations themselves. The inner product between two vectors is the sum of the multiplication of each pair of input values. For example, the inner product of the vectors [2, 3] and [5, 6] is 2 × 5 + 3 × 6, or 28. The equation for making a prediction for a new input, using the dot product between the input (x) and each support vector (x_i), is calculated as follows:

![svm.png](attachment:svm.png)

This is an equation that involves calculating the inner products of a new input vector (x) with all support vectors in the training data. The coefficients B0 and a_i (for each input) must be estimated from the training data by the learning algorithm.

The SVM model needs to be solved using an optimization procedure. You can use a numerical optimization procedure to search for the coefficients of the hyperplane, but this is inefficient and is not the approach used in widely used SVM implementations like LIBSVM. If implementing the algorithm as an exercise, you could use a variation of gradient descent called sub-gradient descent. There are specialized optimization procedures that re-formulate the optimization problem as a Quadratic Programming problem. The most popular method for fitting SVM is the Sequential Minimal Optimization (SMO) method, which is very efficient. It breaks the problem down into sub-problems that can be solved analytically (by calculating) rather than numerically (by searching or optimizing).

Some terminology for the regression (SVR) variant used below:

* Kernel: the function used to map lower-dimensional data into a higher-dimensional space.
* Hyperplane: in SVM this is basically the separation line between the data classes, although in SVR we are going to define it as the line that will help us predict the continuous target value.
* Boundary lines: in SVM there are two lines other than the hyperplane which create the margin. The support vectors can be on the boundary lines or outside them. These boundary lines separate the two classes. In SVR the concept is the same.
* Support vectors: the data points closest to the boundary; their distance to it is minimal.

In simple regression we try to minimise the error rate, while in SVR we try to fit the error within a certain threshold. Our best-fit line is the hyperplane that contains the maximum number of points. What we are trying to do is decide a decision boundary at a distance 'e' (epsilon) from the original hyperplane, such that the data points closest to the hyperplane (the support vectors) are within that boundary.
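A minimal sketch of this inner-product form, using made-up support vectors and coefficients (in a real SVM the a_i and B0 come from the optimization procedure, and the plain dot product is replaced by a kernel function):
###Code
import numpy as np

# Worked inner product from the text above: [2, 3] . [5, 6] = 2*5 + 3*6 = 28
print(np.dot([2, 3], [5, 6]))

# f(x) = B0 + sum_i a_i * <x, x_i>, with assumed (not learned) values
support_vectors = np.array([[2.0, 3.0], [5.0, 6.0]])
a = np.array([0.5, -0.5])   # one coefficient per support vector (assumed)
B0 = 0.1                    # intercept (assumed)

def predict(x):
    return B0 + np.sum(a * support_vectors.dot(x))

print(predict(np.array([1.0, 1.0])))  # the sign of this value gives the class
###Output
 _____no_output_____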
###Code
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
print(dataset)
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Fitting SVR to the dataset
from sklearn.svm import SVR
regressor = SVR(kernel = 'rbf')
#rbf = Gaussian Radial Basis Function Kernel
regressor.fit(X_train, y_train)
# Predicting a new result
print(X_test)
y_pred = regressor.predict(X_test)
y_pred
y_test
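# Added sketch, not part of the original solution: the RBF kernel is sensitive to
# feature scale and salary-style targets span a very wide range, so a commonly used
# variant standardises X and y before fitting and inverts the scaling afterwards.
from sklearn.preprocessing import StandardScaler
sc_X, sc_y = StandardScaler(), StandardScaler()
regressor_scaled = SVR(kernel = 'rbf')
regressor_scaled.fit(sc_X.fit_transform(X_train),
                     sc_y.fit_transform(y_train.reshape(-1, 1)).ravel())
y_pred_scaled = sc_y.inverse_transform(
    regressor_scaled.predict(sc_X.transform(X_test)).reshape(-1, 1)).ravel()
y_pred_scaled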
###Output
_____no_output_____ |
predictions-final/Rookie_Predictions_Week1.ipynb | ###Markdown
###Code
# Installs
%%capture
!pip install category_encoders==2.0.0
# Import libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor
import category_encoders as ce
# Load the data
player_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/raw/players_full.csv')
kickers_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/actuals/actuals_rookie_kickers.csv')
offense_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/actuals/actuals_rookie_offense.csv')
kickers2019_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/actuals/actuals_rookie2019_kickers.csv')
offense2019_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/actuals/actuals_rookie2019_offense.csv')
rookies_df = pd.read_csv('https://raw.githubusercontent.com/Lambda-School-Labs/nfl-fantasy-ds/master/data/raw/rookies.csv')
actuals_df = pd.concat([kickers_df, offense_df], ignore_index=True)
actuals2019_df = pd.concat([kickers2019_df, offense2019_df], ignore_index=True)
# The players_df dataframe of all 2019 players and their rookie stats
player_df.head()
# The actual_df dataframe of actual rookie points for the rookie year of each veteran
actuals_df.head()
# The rookies_df dataframe of all 2019 rookies
rookies_df.head()
# The actuals2019_df dataframe of the actual rookie points for each 2019 rookie
actuals2019_df.head()
# The main code for iterating through the player list, calculating the points and adding the rows
# to the final_df dataframe.
def fill_df(p_df, tf):
column_names = ['player',
'position1',
'height',
'weight',
'forty',
'bench',
'vertical',
'broad',
'shuttle',
'cone',
'arm',
'hand',
'dpos',
'col',
'dv',
'points'
]
player_list = p_df['player'].tolist()
df = pd.DataFrame(columns = column_names)
for player in player_list:
position1 = player_df['position1'].loc[(player_df['player']==player)].iloc[0]
height = player_df['height'].loc[(player_df['player']==player)].iloc[0]
weight = player_df['weight'].loc[(player_df['player']==player)].iloc[0]
forty = player_df['forty'].loc[(player_df['player']==player)].iloc[0]
bench = player_df['bench'].loc[(player_df['player']==player)].iloc[0]
vertical = player_df['vertical'].loc[(player_df['player']==player)].iloc[0]
broad = player_df['broad'].loc[(player_df['player']==player)].iloc[0]
shuttle = player_df['shuttle'].loc[(player_df['player']==player)].iloc[0]
core = player_df['cone'].loc[(player_df['player']==player)].iloc[0]
arm = player_df['arm'].loc[(player_df['player']==player)].iloc[0]
hand = player_df['hand'].loc[(player_df['player']==player)].iloc[0]
dpos = player_df['dpos'].loc[(player_df['player']==player)].iloc[0]
college = player_df['col'].loc[(player_df['player']==player)].iloc[0]
division = player_df['dv'].loc[(player_df['player']==player)].iloc[0]
if tf == 'train':
points = actuals_df.loc[(actuals_df['player']==player)].iloc[0, 5:21].sum()
else:
points = actuals2019_df.loc[(actuals2019_df['player']==player)].iloc[0, 5:21].sum()
df = df.append({'player': player,
'position1': position1,
'height': height,
'weight': weight,
'forty': forty,
'bench': bench,
'vertical': vertical,
'broad': broad,
'shuttle': shuttle,
'cone': core,
'arm': arm,
'hand': hand,
'dpos': dpos,
'col': college,
'dv': division,
'points': points
}, ignore_index=True)
return df
# Create the train and test data series
train_df = fill_df(actuals_df, 'train')
test_df = fill_df(rookies_df, 'final')
# The training data of rookie year with a target of rookie year points
train_df.tail()
# The test data with a target of 2019 points
test_df.tail()
# Set up train, test and target for model
target = 'points'
X_train = train_df.drop(columns=[target])
y_train = train_df[target]
X_test = test_df.drop(columns=[target])
y_test = test_df[target]
# Split the initial train features and labels 80/20 into train and validate
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, train_size = 0.80, test_size = 0.20)
# Run the XGBoost model
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
XGBRegressor(n_estimators=200, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
# Print metrics for validation
val_mse = mean_squared_error(y_val, y_pred)
val_rmse = np.sqrt(val_mse)
val_mae = mean_absolute_error(y_val, y_pred)
val_r2 = r2_score(y_val, y_pred)
print('Validation Mean Absolute Error:', val_mae)
print('Validation R^2:', val_r2)
print('\n')
ty_pred = pipeline.predict(X_test)
# Print metrics for test
test_mse = mean_squared_error(y_test, ty_pred)
test_rmse = np.sqrt(test_mse)
test_mae = mean_absolute_error(y_test, ty_pred)
test_r2 = r2_score(y_test, ty_pred)
print('Test Mean Absolute Error:', test_mae)
print('Test R^2:', test_r2)
# Store the predictions in the results_df dataframe
results_df = test_df
results_df['prediction'] = ty_pred
results_df['diff'] = results_df['prediction'] - results_df['points']
results_df['percent'] = results_df['diff'] / results_df['prediction']
results_df.head()
# Add a row to the final_df dataframe
# Each row represents the predicted points for each rookie
def add_row(df, p, f, l, n, pos, cur, pred, act, diff, pct):
df = df.append({'player': p,
'first': f,
'last': l,
'name': n,
'position': pos,
'week1-cur': cur,
'week1-pred': pred,
'week1-act': act,
'week1-diff': diff,
'week1-pct': pct
}, ignore_index=True)
return df
# Iterate through the list of rookies to create the fina_df dataframe
column_names = ['player',
'first',
'last',
'name',
'position',
'week1-cur',
'week1-pred',
'week1-act',
'week1-diff',
'week1-pct'
]
player_list = results_df['player'].tolist()
final_df = pd.DataFrame(columns = column_names)
for player in player_list:
first = player_df['first'].loc[(player_df['player']==player)].iloc[0]
last = player_df['last'].loc[(player_df['player']==player)].iloc[0]
name = player_df['name'].loc[(player_df['player']==player)].iloc[0]
position = player_df['position1'].loc[(player_df['player']==player)].iloc[0]
week1_cur = 0
week1_pred = results_df['prediction'].loc[(results_df['player']==player)].iloc[0]
week1_act = results_df['points'].loc[(results_df['player']==player)].iloc[0]
week1_diff = results_df['diff'].loc[(results_df['player']==player)].iloc[0]
week1_pct = results_df['percent'].loc[(results_df['player']==player)].iloc[0]
final_df = add_row(final_df, player, first, last, name, position, week1_cur, week1_pred, week1_act, week1_diff, week1_pct)
# Convert pred to integer
final_df['week1-pred'] = final_df['week1-pred'].astype(int)
final_df.head()
# Save the results to .csv file
final_df.to_csv('/content/week1-pred-offense-rookies.csv', index=False)
###Output
_____no_output_____ |
_notebooks/2020-03-08-keras-neural-non-linear.ipynb | ###Markdown
Some Neural Network Classification
> A programming introduction to NNs.
- toc: true
- badges: true
- comments: true
- author: Nipun Batra
- categories: [ML]
###Code
from sklearn.datasets import make_moons, load_iris
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
X, y = make_moons()
plt.scatter(X[:, 0], X[:, 1], c= y)
from keras.models import Sequential
from sklearn.metrics import accuracy_score
import os
from keras.layers import Dense, Activation
from keras.utils import to_categorical
model_simple = Sequential([
Dense(1, input_shape=(2,)),
Activation('relu'),
Dense(2),
Activation('softmax'),
])
model_complex = Sequential([
Dense(6, input_shape=(2,)),
Activation('relu'),
Dense(4),
Activation('relu'),
Dense(3),
Activation('relu'),
Dense(2),
Activation('softmax'),
])
model_complex_2 = Sequential([
Dense(10, input_shape=(2,)),
Activation('relu'),
Dense(8, ),
Activation('relu'),
Dense(8),
Activation('relu'),
Dense(2),
Activation('softmax'),
])
model_simple.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model_complex.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model_complex_2.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
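# make_plot retrains the given model one epoch at a time; after each epoch it
# predicts over a dense grid, draws the decision surface with the training points,
# and saves a frame to disk (the convert commands below stitch the frames into GIFs).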
def make_plot(X, y, model, dataset, model_type, noise, n_iter=80,cmap='PRGn'):
h=200
if dataset=="moon":
X, y = make_moons(noise=noise)
if dataset=="iris":
X, y = load_iris()['data'][:, :2], load_iris()['target']
print(X.shape, y.shape)
y_binary = to_categorical(y)
xx, yy = np.meshgrid(np.linspace(X[:, 0].min()-0.2, X[:, 0].max()+0.2, h),
np.linspace(X[:, 1].min()-0.2, X[:, 1].max()+0.2, h))
XX = np.c_[xx.ravel(), yy.ravel()]
for i in range(n_iter):
model.fit(X, y_binary, epochs=1, verbose=0)
Z = np.argmax(model.predict(XX), axis=1).reshape(xx.shape)
y_hat = np.argmax(model.predict(X), axis=1)
train_accuracy = accuracy_score(y, y_hat)
contours = plt.contourf(xx, yy, Z, h , cmap=cmap, alpha=0.4)
plt.title("Iteration: "+str(i)+"\n Accuracy:"+str(train_accuracy))
plt.colorbar()
plt.scatter(X[:, 0], X[:, 1], c= y, cmap=cmap)
if not os.path.exists(f"/Users/nipun/Desktop/animation-keras/{dataset}/{model_type}/{noise}/"):
os.makedirs(f"/Users/nipun/Desktop/animation-keras/{dataset}/{model_type}/{noise}/")
plt.savefig(f"/Users/nipun/Desktop/animation-keras/{dataset}/{model_type}/{noise}/{i:03}.png")
plt.clf()
make_plot(X, y, model_simple, "moon", "simple", None)
!convert -delay 20 -loop 0 /Users/nipun/Desktop/animation-keras/moon/simple/None/*.png moon-simple-none.gif
###Output
_____no_output_____
###Markdown
![](moon-simple-none.gif)
###Code
make_plot(X, y, model_complex, "moon", "complex", None, 500)
!convert -delay 20 -loop 0 /Users/nipun/Desktop/animation-keras/moon/complex/None/*.png moon-complex-none.gif
###Output
_____no_output_____
###Markdown
![](moon-complex-none.gif)
###Code
make_plot(X, y, model_complex_2, "moon", "complex", 0.3, 700)
!convert -delay 20 -loop 0 /Users/nipun/Desktop/animation-keras/moon/complex/0.3/*.png moon-complex-03.gif
###Output
_____no_output_____
###Markdown
![](moon-complex-03.gif)
###Code
model_simple_2 = Sequential([
Dense(1, input_shape=(2,)),
Activation('relu'),
Dense(3),
Activation('softmax'),
])
model_simple_2.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
make_plot(X, y, model_simple_2, "iris", "simple", None, 500)
!convert -delay 20 -loop 0 /Users/nipun/Desktop/animation-keras/iris/simple/None/*.png iris-simple.gif
###Output
_____no_output_____
###Markdown
![](iris-simple.gif)
###Code
model_complex_iris = Sequential([
Dense(12, input_shape=(2,)),
Activation('relu'),
Dense(6),
Activation('relu'),
Dense(4),
Activation('relu'),
Dense(3),
Activation('softmax'),
])
model_complex_iris.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
make_plot(X, y, model_complex_iris, "iris", "complex", None, 500)
!convert -delay 20 -loop 0 /Users/nipun/Desktop/animation-keras/iris/complex/None/*.png iris-complex.gif
###Output
_____no_output_____ |
02-NumPy/Numpy Exercises - Solved.ipynb | ###Markdown
___
*Copyright Pierian Data 2017*
*For more information, visit us at www.pieriandata.com*

NumPy Exercises
Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.

**IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output!**

Import NumPy as np
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create an array of 10 zeros
###Code
# CODE HERE
np.zeros(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 ones
###Code
# CODE HERE
np.ones(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 fives
###Code
# CODE HERE
np.ones(10) * 5
###Output
_____no_output_____
###Markdown
Create an array of the integers from 10 to 50
###Code
# CODE HERE
np.arange(10, 51)
###Output
_____no_output_____
###Markdown
Create an array of all the even integers from 10 to 50
###Code
# CODE HERE
np.arange(10, 51, 2)
###Output
_____no_output_____
###Markdown
Create a 3x3 matrix with values ranging from 0 to 8
###Code
# CODE HERE
np.arange(9).reshape(3,3)
###Output
_____no_output_____
###Markdown
Create a 3x3 identity matrix
###Code
# CODE HERE
np.eye(3)
###Output
_____no_output_____
###Markdown
Use NumPy to generate a random number between 0 and 1
###Code
# CODE HERE
np.random.rand(1)
###Output
_____no_output_____
###Markdown
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
###Code
# CODE HERE
np.random.randn(25)
###Output
_____no_output_____
###Markdown
Create the following matrix:
###Code
np.arange(1, 101).reshape(10, 10) / 100
###Output
_____no_output_____
###Markdown
Create an array of 20 linearly spaced points between 0 and 1:
###Code
np.linspace(0, 1, 20)
###Output
_____no_output_____
###Markdown
Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
###Code
# HERE IS THE GIVEN MATRIX CALLED MAT
# USE IT FOR THE FOLLOWING TASKS
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:, ]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3, -1]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3, 1].reshape(3, 1)
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[-1, :]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[-2:, :]
###Output
_____no_output_____
###Markdown
Now do the following
Get the sum of all the values in mat
###Code
# CODE HERE
np.sum(mat)
###Output
_____no_output_____
###Markdown
Get the standard deviation of the values in mat
###Code
# CODE HERE
np.std(mat)
###Output
_____no_output_____
###Markdown
Get the sum of all the columns in mat
###Code
# CODE HERE
np.sum(mat, axis = 0)
###Output
_____no_output_____
###Markdown
Bonus Question
We worked a lot with random data with numpy, but is there a way we can ensure that we always get the same random numbers? [Click Here for a Hint](https://www.google.com/search?q=numpy+random+seed&rlz=1C1CHBF_enUS747US747&oq=numpy+random+seed&aqs=chrome..69i57j69i60j0l4.2087j0j7&sourceid=chrome&ie=UTF-8)
###Code
# My favourite number is 7
np.random.seed(7)
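# With a fixed seed, the same sequence of "random" numbers is produced every time
np.random.seed(7)
first_draw = np.random.rand(3)
np.random.seed(7)
second_draw = np.random.rand(3)
print(first_draw)
print(second_draw)  # identical to first_draw because the seed was reset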
###Output
_____no_output_____ |
.ipynb_checkpoints/FACT-checkpoint.ipynb | ###Markdown
Save saliency mapsFo
###Code
from save_all_grad_maps import *
###Output
_____no_output_____ |
ipynb/deprecated/examples/wlgen/rtapp_custom_example.ipynb | ###Markdown
Test environment setup
###Code
# Setup a target configuration
my_conf = {
# Define the kind of target platform to use for the experiments
"platform" : 'linux', # Linux system, valid other options are:
# android - access via ADB
# linux - access via SSH
# host - direct access
# Preload settings for a specific target
"board" : 'hikey960', # load JUNO specific settings, e.g.
# - HWMON based energy sampling
# - Juno energy model
# valid options are:
# - juno - JUNO Development Board
# - tc2 - TC2 Development Board
# - oak - Mediatek MT63xx based target
# Define devlib module to load
"modules" : [
'bl', # enable big.LITTLE support
'cpufreq', # enable CPUFreq support
'hwmon'
],
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# FTrace events end buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"cpu_frequency"
],
"buffsize" : 10240
},
# Account to access the remote target
"host" : '192.168.0.1',
"username" : 'root',
"password" : 'root',
# Comment the following line to force rt-app calibration on your target
#"rtapp-calib" : {
# '0': 361, '1': 138, '2': 138, '3': 352, '4': 360, '5': 353
#}
"rtapp-calib" : {
"0": 302, "1": 302, "2": 302, "3": 302, "4": 136, "5": 136, "6": 136, "7": 136
},
}
te = TestEnv(target_conf=my_conf)
target = te.target
###Output
2018-02-23 11:49:28,952 INFO : TestEnv : Using base path: /data/work/lisa
2018-02-23 11:49:28,953 INFO : TestEnv : Loading custom (inline) target configuration
2018-02-23 11:49:28,954 INFO : TestEnv : Devlib modules to load: ['bl', 'cpuidle', 'cpufreq', 'hwmon']
2018-02-23 11:49:28,955 INFO : TestEnv : Connecting linux target:
2018-02-23 11:49:28,956 INFO : TestEnv : username : root
2018-02-23 11:49:28,956 INFO : TestEnv : host : 192.168.0.1
2018-02-23 11:49:28,957 INFO : TestEnv : password : root
2018-02-23 11:49:28,958 INFO : TestEnv : Connection settings:
2018-02-23 11:49:28,958 INFO : TestEnv : {'username': 'root', 'host': '192.168.0.1', 'password': 'root'}
2018-02-23 11:49:32,580 INFO : TestEnv : Initializing target workdir:
2018-02-23 11:49:32,582 INFO : TestEnv : /root/devlib-target
2018-02-23 11:49:35,880 INFO : TestEnv : Attempting to read energy model from target
2018-02-23 11:49:36,039 ERROR : TestEnv : Couldn't read target energy model: Energy Model not exposed in sysfs. Check CONFIG_SCHED_DEBUG is enabled.
2018-02-23 11:49:36,042 INFO : TestEnv : Topology:
2018-02-23 11:49:36,043 INFO : TestEnv : [[0, 1, 2, 3], [4, 5, 6, 7]]
2018-02-23 11:49:36,661 INFO : TestEnv : Loading default EM:
2018-02-23 11:49:36,663 INFO : TestEnv : /data/work/lisa/libs/utils/platforms/hikey960.json
2018-02-23 11:49:37,482 INFO : TestEnv : Enabled tracepoints:
2018-02-23 11:49:37,483 INFO : TestEnv : sched_switch
2018-02-23 11:49:37,483 INFO : TestEnv : cpu_frequency
2018-02-23 11:49:37,484 WARNING : TestEnv : Using configuration provided RTApp calibration
2018-02-23 11:49:37,485 INFO : TestEnv : Using RT-App calibration values:
2018-02-23 11:49:37,485 INFO : TestEnv : {"0": 302, "1": 302, "2": 302, "3": 302, "4": 136, "5": 136, "6": 136, "7": 136}
2018-02-23 11:49:37,486 INFO : TestEnv : Set results folder to:
2018-02-23 11:49:37,487 INFO : TestEnv : /data/work/lisa/results/20180223_114928
2018-02-23 11:49:37,487 INFO : TestEnv : Experiment results available also in:
2018-02-23 11:49:37,488 INFO : TestEnv : /data/work/lisa/results_latest
###Markdown
Create a new RTA workload generator object The wlgen::RTA class is a workload generator which exposes an API to configure RTApp-based workloads as well as to execute them on a target.
###Code
# Create a new RTApp workload generator
rtapp = RTA(
target=te.target, # Target execution on the local machine
name='example', # This is the name of the JSON configuration file reporting
# the generated RTApp configuration
#calibration={0: 10, 1: 11, 2: 12, 3: 13} # These are a set of fake
# # calibration values
)
###Output
2018-02-23 11:49:37,493 INFO : Workload : Setup new workload example
###Markdown
The function here, build_perf_benchmark_rtapp, demonstrates how we can build an rt-app job description which uses a controller task to unblock a number of child tasks, which then each perform a set amount of work. This is intended to simulate performance benchmarks. Many aspects of the test can be configured via parameters.
###Code
def build_perf_benchmark_rtapp(run_duration_ms, calibration_cpu_name, num_tasks, iterations, logdir="/data/local/tmp", file_name="perfbench.json"):
# static content
json_content = {
'global': {
'calibration': calibration_cpu_name,
'default_policy': "SCHED_OTHER",
'duration': -1,
'logdir': logdir
},
'tasks': {
'controller': {
'loop': iterations+2,
'phases': {
'init_delay': {
'sleep': run_duration_ms*4
}
}
}
}
}
# dynamic content (number of tasks)
for cpu in range(0,num_tasks):
bench_thread_name = "bench{}".format(cpu)
# describe the worker thread
json_content['tasks'][bench_thread_name] = {
'loop': iterations,
'phases': {
'go': {
'run': run_duration_ms
},
'wait': {
'suspend': bench_thread_name
}
}
}
# hook it to the controller
json_content['tasks']['controller']['phases']["trigger{}".format(cpu)] = { 'resume': bench_thread_name }
with open(file_name, 'w') as outfile:
json.dump(json_content, outfile,
sort_keys=True, indent=4, separators=(',', ': '))
return (file_name, json_content)
run_duration_ms = 500000
calibration = 'CPU4'
num_tasks = 8
iterations = 8
(filename, inline_config)=build_perf_benchmark_rtapp(run_duration_ms, calibration, num_tasks, iterations)
# Configure this RTApp instance to:
name=rtapp.conf(
# 1. generate a "profile based" set of tasks
#kind='profile',
kind='custom',
duration=-1,
# 2. define the "profile" of each task
# to use inline job description:
params=inline_config,
# to use filename instead:
# params="{}".format(filename),
# 3. use this folder for task logfiles
run_dir='/data/local/tmp'
)
logging.info('Generated RTApp JSON:')
print(json.dumps(inline_config, indent=4, sort_keys=True))
te.ftrace.start()
logging.info('#### Start RTApp execution')
rtapp.run(cgroup="")
logging.info('#### Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(te.res_dir, 'trace.dat')
logging.info('#### Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('#### Save platform description: %s/platform.json', te.res_dir)
(plt, plt_file) = te.platform_dump(te.res_dir)
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(te.res_dir)
###Output
2018-02-23 11:52:04,388 INFO : root : #### Start RTApp execution
2018-02-23 11:52:04,390 INFO : Workload : Workload execution START:
2018-02-23 11:52:04,391 INFO : Workload : /root/devlib-target/bin/rt-app /data/local/tmp/example_00.json 2>&1
2018-02-23 11:52:32,764 INFO : root : #### Stop FTrace
2018-02-23 11:52:33,242 INFO : root : #### Save FTrace: /data/work/lisa/results/20180223_114928/trace.dat
2018-02-23 11:52:37,195 INFO : root : #### Save platform description: /data/work/lisa/results/20180223_114928/platform.json
|
MergeandJoin_Notes.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Combining Datasets: Merge and Join One essential feature offered by Pandas is its high-performance, in-memory join and merge operations.If you have ever worked with databases, you should be familiar with this type of data interaction.The main interface for this is the ``pd.merge`` function, and we'll see few examples of how this can work in practice.For convenience, we will start by redefining the ``display()`` functionality from the previous section:
###Code
import pandas as pd
import numpy as np
class display(object):
"""Display HTML representation of multiple objects"""
template = """<div style="float: left; padding: 10px;">
<p style='font-family:"Courier New", Courier, monospace'>{0}</p>{1}
</div>"""
def __init__(self, *args):
self.args = args
def _repr_html_(self):
return '\n'.join(self.template.format(a, eval(a)._repr_html_())
for a in self.args)
def __repr__(self):
return '\n\n'.join(a + '\n' + repr(eval(a))
for a in self.args)
###Output
_____no_output_____
###Markdown
Relational AlgebraThe behavior implemented in ``pd.merge()`` is a subset of what is known as *relational algebra*, which is a formal set of rules for manipulating relational data, and forms the conceptual foundation of operations available in most databases.The strength of the relational algebra approach is that it proposes several primitive operations, which become the building blocks of more complicated operations on any dataset.With this lexicon of fundamental operations implemented efficiently in a database or other program, a wide range of fairly complicated composite operations can be performed.Pandas implements several of these fundamental building-blocks in the ``pd.merge()`` function and the related ``join()`` method of ``Series`` and ``Dataframe``s.As we will see, these let you efficiently link data from different sources. Categories of JoinsThe ``pd.merge()`` function implements a number of types of joins: the *one-to-one*, *many-to-one*, and *many-to-many* joins.All three types of joins are accessed via an identical call to the ``pd.merge()`` interface; the type of join performed depends on the form of the input data.Here we will show simple examples of the three types of merges, and discuss detailed options further below. One-to-one joinsPerhaps the simplest type of merge expression is the one-to-one join, which is in many ways very similar to the column-wise concatenation seen in [Combining Datasets: Concat & Append](03.06-Concat-And-Append.ipynb).As a concrete example, consider the following two ``DataFrames`` which contain information on several employees in a company:
###Code
df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'],
'group': ['Accounting', 'Engineering', 'Engineering', 'HR']})
df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'],
'hire_date': [2004, 2008, 2012, 2014]})
display('df1', 'df2')
###Output
_____no_output_____
###Markdown
To combine this information into a single ``DataFrame``, we can use the ``pd.merge()`` function:
###Code
df3 = pd.merge(df1, df2)
df3
###Output
_____no_output_____
###Markdown
The ``pd.merge()`` function recognizes that each ``DataFrame`` has an "employee" column, and automatically joins using this column as a key.The result of the merge is a new ``DataFrame`` that combines the information from the two inputs.Notice that the order of entries in each column is not necessarily maintained: in this case, the order of the "employee" column differs between ``df1`` and ``df2``, and the ``pd.merge()`` function correctly accounts for this.Additionally, keep in mind that the merge in general discards the index, except in the special case of merges by index (see the ``left_index`` and ``right_index`` keywords, discussed momentarily). Many-to-one joins Many-to-one joins are joins in which one of the two key columns contains duplicate entries.For the many-to-one case, the resulting ``DataFrame`` will preserve those duplicate entries as appropriate.Consider the following example of a many-to-one join:
###Code
df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'],
'supervisor': ['Carly', 'Guido', 'Steve']})
display('df3', 'df4', 'pd.merge(df3, df4)')
###Output
_____no_output_____
###Markdown
The resulting ``DataFrame`` has an additional column with the "supervisor" information, where the information is repeated in one or more locations as required by the inputs. Many-to-many joins Many-to-many joins are a bit confusing conceptually, but are nevertheless well defined.If the key column in both the left and right array contains duplicates, then the result is a many-to-many merge.This will be perhaps most clear with a concrete example.Consider the following, where we have a ``DataFrame`` showing one or more skills associated with a particular group.By performing a many-to-many join, we can recover the skills associated with any individual person:
###Code
df5 = pd.DataFrame({'group': ['Accounting', 'Accounting',
'Engineering', 'Engineering', 'HR', 'HR'],
'skills': ['math', 'spreadsheets', 'coding', 'linux',
'spreadsheets', 'organization']})
display('df1', 'df5', "pd.merge(df1, df5)")
###Output
_____no_output_____
###Markdown
These three types of joins can be used with other Pandas tools to implement a wide array of functionality.But in practice, datasets are rarely as clean as the one we're working with here.In the following section we'll consider some of the options provided by ``pd.merge()`` that enable you to tune how the join operations work. Specification of the Merge Key We've already seen the default behavior of ``pd.merge()``: it looks for one or more matching column names between the two inputs, and uses this as the key.However, often the column names will not match so nicely, and ``pd.merge()`` provides a variety of options for handling this. The ``on`` keywordMost simply, you can explicitly specify the name of the key column using the ``on`` keyword, which takes a column name or a list of column names:
###Code
display('df1', 'df2', "pd.merge(df1, df2, on='employee')")
###Output
_____no_output_____
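###Markdown
Since ``on`` also accepts a list of column names, here is a short sketch with two hypothetical ``DataFrame``s (``dfa`` and ``dfb``, invented for illustration) keyed on both employee and year:
###Code
dfa = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa'],
                    'year': [2012, 2013, 2013],
                    'group': ['Accounting', 'Engineering', 'Engineering']})
dfb = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa'],
                    'year': [2012, 2013, 2013],
                    'hours': [1900, 2100, 2050]})
display('dfa', 'dfb', "pd.merge(dfa, dfb, on=['employee', 'year'])")
###Output
_____no_output_____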
###Markdown
This option works only if both the left and right ``DataFrame``s have the specified column name. The ``left_on`` and ``right_on`` keywordsAt times you may wish to merge two datasets with different column names; for example, we may have a dataset in which the employee name is labeled as "name" rather than "employee".In this case, we can use the ``left_on`` and ``right_on`` keywords to specify the two column names:
###Code
df3 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'],
'salary': [70000, 80000, 120000, 90000]})
display('df1', 'df3', 'pd.merge(df1, df3, left_on="employee", right_on="name")')
###Output
_____no_output_____
###Markdown
The result has a redundant column that we can drop if desired–for example, by using the ``drop()`` method of ``DataFrame``s:
###Code
pd.merge(df1, df3, left_on="employee", right_on="name").drop('name', axis=1)
###Output
_____no_output_____
###Markdown
The ``left_index`` and ``right_index`` keywordsSometimes, rather than merging on a column, you would instead like to merge on an index.For example, your data might look like this:
###Code
df1a = df1.set_index('employee')
df2a = df2.set_index('employee')
display('df1a', 'df2a')
###Output
_____no_output_____
###Markdown
You can use the index as the key for merging by specifying the ``left_index`` and/or ``right_index`` flags in ``pd.merge()``:
###Code
display('df1a', 'df2a',
"pd.merge(df1a, df2a, left_index=True, right_index=True)")
###Output
_____no_output_____
###Markdown
For convenience, ``DataFrame``s implement the ``join()`` method, which performs a merge that defaults to joining on indices:
###Code
display('df1a', 'df2a', 'df1a.join(df2a)')
###Output
_____no_output_____
###Markdown
If you'd like to mix indices and columns, you can combine ``left_index`` with ``right_on`` or ``left_on`` with ``right_index`` to get the desired behavior:
###Code
display('df1a', 'df3', "pd.merge(df1a, df3, left_index=True, right_on='name')")
###Output
_____no_output_____
###Markdown
All of these options also work with multiple indices and/or multiple columns; the interface for this behavior is very intuitive.For more information on this, see the ["Merge, Join, and Concatenate" section](http://pandas.pydata.org/pandas-docs/stable/merging.html) of the Pandas documentation. Specifying Set Arithmetic for Joins In all the preceding examples we have glossed over one important consideration in performing a join: the type of set arithmetic used in the join.This comes up when a value appears in one key column but not the other. Consider this example:
###Code
df6 = pd.DataFrame({'name': ['Peter', 'Paul', 'Mary'],
'food': ['fish', 'beans', 'bread']},
columns=['name', 'food'])
df7 = pd.DataFrame({'name': ['Mary', 'Joseph'],
'drink': ['wine', 'beer']},
columns=['name', 'drink'])
display('df6', 'df7', 'pd.merge(df6, df7)')
###Output
_____no_output_____
###Markdown
Here we have merged two datasets that have only a single "name" entry in common: Mary.By default, the result contains the *intersection* of the two sets of inputs; this is what is known as an *inner join*.We can specify this explicitly using the ``how`` keyword, which defaults to ``"inner"``:
###Code
pd.merge(df6, df7, how='inner')
###Output
_____no_output_____
###Markdown
Other options for the ``how`` keyword are ``'outer'``, ``'left'``, and ``'right'``.An *outer join* returns a join over the union of the input columns, and fills in all missing values with NAs:
###Code
display('df6', 'df7', "pd.merge(df6, df7, how='outer')")
###Output
_____no_output_____
###Markdown
The *left join* and *right join* return joins over the left entries and right entries, respectively.For example:
###Code
display('df6', 'df7', "pd.merge(df6, df7, how='left')")
###Output
_____no_output_____
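###Markdown
The output rows now correspond to the entries in the left input. Using ``how='right'`` works in a similar manner; for completeness, a quick sketch of the right join:
###Code
display('df6', 'df7', "pd.merge(df6, df7, how='right')")
###Output
_____no_output_____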
###Markdown
All of these options can be applied straightforwardly to any of the preceding join types. Overlapping Column Names: The ``suffixes`` Keyword Finally, you may end up in a case where your two input ``DataFrame``s have conflicting column names.Consider this example:
###Code
df8 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'],
'rank': [1, 2, 3, 4]})
df9 = pd.DataFrame({'name': ['Bob', 'Jake', 'Lisa', 'Sue'],
'rank': [3, 1, 4, 2]})
display('df8', 'df9', 'pd.merge(df8, df9, on="name")')
###Output
_____no_output_____
###Markdown
Because the output would have two conflicting column names, the merge function automatically appends a suffix ``_x`` or ``_y`` to make the output columns unique.If these defaults are inappropriate, it is possible to specify a custom suffix using the ``suffixes`` keyword:
###Code
display('df8', 'df9', 'pd.merge(df8, df9, on="name", suffixes=["_L", "_R"])')
###Output
_____no_output_____
###Markdown
These suffixes work in any of the possible join patterns, and work also if there are multiple overlapping columns. For more information on these patterns, see [Aggregation and Grouping](03.08-Aggregation-and-Grouping.ipynb) where we dive a bit deeper into relational algebra.Also see the [Pandas "Merge, Join and Concatenate" documentation](http://pandas.pydata.org/pandas-docs/stable/merging.html) for further discussion of these topics. Example: US States DataMerge and join operations come up most often when combining data from different sources.Here we will consider an example of some data about US states and their populations.The data files can be found at http://github.com/jakevdp/data-USstates/:
###Code
# Following are shell commands to download the data
# !curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-population.csv
# !curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-areas.csv
# !curl -O https://raw.githubusercontent.com/jakevdp/data-USstates/master/state-abbrevs.csv
###Output
_____no_output_____
###Markdown
Let's take a look at the three datasets, using the Pandas ``read_csv()`` function:
###Code
pop = pd.read_csv('data/state-population.csv')
areas = pd.read_csv('data/state-areas.csv')
abbrevs = pd.read_csv('data/state-abbrevs.csv')
display('pop.head()', 'areas.head()', 'abbrevs.head()')
###Output
_____no_output_____
###Markdown
Given this information, say we want to compute a relatively straightforward result: rank US states and territories by their 2010 population density.We clearly have the data here to find this result, but we'll have to combine the datasets to find the result.We'll start with a many-to-one merge that will give us the full state name within the population ``DataFrame``.We want to merge based on the ``state/region`` column of ``pop``, and the ``abbreviation`` column of ``abbrevs``.We'll use ``how='outer'`` to make sure no data is thrown away due to mismatched labels.
###Code
merged = pd.merge(pop, abbrevs, how='outer',
left_on='state/region', right_on='abbreviation')
merged = merged.drop('abbreviation', axis=1) # drop duplicate info
merged.head()
###Output
_____no_output_____
###Markdown
Let's double-check whether there were any mismatches here, which we can do by looking for rows with nulls:
###Code
merged.isnull().any()
###Output
_____no_output_____
###Markdown
Some of the ``population`` info is null; let's figure out which these are!
###Code
merged[merged['population'].isnull()].head()
###Output
_____no_output_____
###Markdown
It appears that all the null population values are from Puerto Rico prior to the year 2000; this is likely due to this data not being available from the original source.More importantly, we see also that some of the new ``state`` entries are also null, which means that there was no corresponding entry in the ``abbrevs`` key!Let's figure out which regions lack this match:
###Code
merged.loc[merged['state'].isnull(), 'state/region'].unique()
###Output
_____no_output_____
###Markdown
We can quickly infer the issue: our population data includes entries for Puerto Rico (PR) and the United States as a whole (USA), while these entries do not appear in the state abbreviation key.We can fix these quickly by filling in appropriate entries:
###Code
merged.loc[merged['state/region'] == 'PR', 'state'] = 'Puerto Rico'
merged.loc[merged['state/region'] == 'USA', 'state'] = 'United States'
merged.isnull().any()
###Output
_____no_output_____
###Markdown
No more nulls in the ``state`` column: we're all set!Now we can merge the result with the area data using a similar procedure.Examining our results, we will want to join on the ``state`` column in both:
###Code
final = pd.merge(merged, areas, on='state', how='left')
final.head()
###Output
_____no_output_____
###Markdown
Again, let's check for nulls to see if there were any mismatches:
###Code
final.isnull().any()
###Output
_____no_output_____
###Markdown
There are nulls in the ``area`` column; we can take a look to see which regions were ignored here:
###Code
final['state'][final['area (sq. mi)'].isnull()].unique()
###Output
_____no_output_____
###Markdown
We see that our ``areas`` ``DataFrame`` does not contain the area of the United States as a whole.We could insert the appropriate value (using the sum of all state areas, for instance), but in this case we'll just drop the null values because the population density of the entire United States is not relevant to our current discussion:
###Code
final.dropna(inplace=True)
final.head()
###Output
_____no_output_____
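###Markdown
If we had instead wanted to keep the national totals, a minimal sketch of the alternative mentioned above (filling the missing USA area with the sum of the state areas, in place of the ``dropna`` step) might look like this:
###Code
# Illustrative alternative only -- this would replace the dropna() call above
usa_area = areas['area (sq. mi)'].sum()
final.loc[final['state'] == 'United States', 'area (sq. mi)'] = usa_area
###Output
_____no_output_____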
###Markdown
Now we have all the data we need. To answer the question of interest, let's first select the portion of the data corresponding with the year 2000, and the total population.We'll use the ``query()`` function to do this quickly (this requires the ``numexpr`` package to be installed; see [High-Performance Pandas: ``eval()`` and ``query()``](03.12-Performance-Eval-and-Query.ipynb)):
###Code
data2010 = final.query("year == 2010 & ages == 'total'")
data2010.head()
###Output
_____no_output_____
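###Markdown
If ``numexpr`` is not available, the same selection can be written with plain boolean masking (an equivalent sketch):
###Code
# Equivalent selection without query()/numexpr
data2010_masked = final[(final['year'] == 2010) & (final['ages'] == 'total')]
data2010_masked.head()
###Output
_____no_output_____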
###Markdown
Now let's compute the population density and display it in order.We'll start by re-indexing our data on the state, and then compute the result:
###Code
data2010.set_index('state', inplace=True)
density = data2010['population'] / data2010['area (sq. mi)']
density.sort_values(ascending=False, inplace=True)
density.head()
###Output
_____no_output_____
###Markdown
The result is a ranking of US states plus Washington, DC, and Puerto Rico in order of their 2010 population density, in residents per square mile.We can see that by far the densest region in this dataset is Washington, DC (i.e., the District of Columbia); among states, the densest is New Jersey.We can also check the end of the list:
###Code
density.tail()
###Output
_____no_output_____ |
Fairness_Survey/ALGORITHMS/Reweighing/Violent.ipynb | ###Markdown
INSTALLATION
###Code
!pip install aif360
!pip install fairlearn
!apt-get install -y default-jre  # a Java runtime is required by H2O (package name assumed: default-jre)
!java -version
!pip install h2o
!pip install xlsxwriter
!pip install BlackBoxAuditing
###Output
Requirement already satisfied: BlackBoxAuditing in /usr/local/lib/python3.7/dist-packages (0.1.54)
Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (2.6.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (1.19.5)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (1.1.5)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.7/dist-packages (from BlackBoxAuditing) (3.2.2)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib->BlackBoxAuditing) (1.3.1)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib->BlackBoxAuditing) (1.15.0)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->BlackBoxAuditing) (2018.9)
###Markdown
IMPORTS
###Code
import numpy as np
from mlxtend.feature_selection import ExhaustiveFeatureSelector
from xgboost import XGBClassifier
# import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import openpyxl
import xlsxwriter
from openpyxl import load_workbook
import BlackBoxAuditing
import shap
# suppress pandas SettingWithCopyWarning
pd.set_option('mode.chained_assignment',None)
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest, SelectFwe, SelectPercentile,SelectFdr, SelectFpr, SelectFromModel
from sklearn.feature_selection import chi2, mutual_info_classif
# from skfeature.function.similarity_based import fisher_score
import aif360
import matplotlib.pyplot as plt
from aif360.metrics.classification_metric import ClassificationMetric
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import DisparateImpactRemover, Reweighing, LFR,OptimPreproc
from aif360.datasets import StandardDataset , BinaryLabelDataset
from sklearn.preprocessing import MinMaxScaler
MM= MinMaxScaler()
import h2o
from h2o.automl import H2OAutoML
from h2o.estimators.glm import H2OGeneralizedLinearEstimator
import sys
sys.path.append("../")
import os
h2o.init()
###Output
Checking whether there is an H2O instance running at http://localhost:54321 ..... not found.
Attempting to start a local H2O server...
Java Version: openjdk version "11.0.11" 2021-04-20; OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.18.04); OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.18.04, mixed mode, sharing)
Starting server from /usr/local/lib/python3.7/dist-packages/h2o/backend/bin/h2o.jar
Ice root: /tmp/tmp19sl5gpa
JVM stdout: /tmp/tmp19sl5gpa/h2o_unknownUser_started_from_python.out
JVM stderr: /tmp/tmp19sl5gpa/h2o_unknownUser_started_from_python.err
Server is running at http://127.0.0.1:54321
Connecting to H2O server at http://127.0.0.1:54321 ... successful.
###Markdown
**************************LOADING DATASET*******************************
###Code
from google.colab import drive
drive.mount('/content/gdrive')
# tr=pd.read_csv(r'/content/gdrive/MyDrive/Datasets/SurveyData/DATASET/Violent/Test/Test50.csv')
# tr
# tester= BinaryLabelDataset(favorable_label=1,
# unfavorable_label=0,
# df=tr,
# label_names=['two_year_recid'],
# protected_attribute_names=['race'],
# unprivileged_protected_attributes=[[0]],
# privileged_protected_attributes=[[1]])
# tester
# DIR = DisparateImpactRemover(repair_level=1.0)
# DI_Test = DIR.fit_transform(tester)
###Output
_____no_output_____
###Markdown
GBM REWEIGHING
###Code
for i in range(11,51,1):
train_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/DATASET/Violent/Train'
train_path= os.path.join(train_url ,("Train"+ str(i)+ ".csv"))
train= pd.read_csv(train_path)
first_column = train.pop('two_year_recid')
train.insert(0, 'two_year_recid', first_column)
test_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/DATASET/Violent/Test'
test_path= os.path.join(test_url ,("Test"+ str(i)+ ".csv"))
test= pd.read_csv(test_path)
first_column = test.pop('two_year_recid')
test.insert(0, 'two_year_recid', first_column)
#********************************************************binary labels for Reweighing*************************************************************
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
bldTrain= BinaryLabelDataset(favorable_label=1,
unfavorable_label=0,
df=train,
label_names=['two_year_recid'],
protected_attribute_names=['race'],
unprivileged_protected_attributes=[[0]],
privileged_protected_attributes=[[1]])
bldTest= BinaryLabelDataset(favorable_label=1,
unfavorable_label=0,
df=test,
label_names=['two_year_recid'],
protected_attribute_names=['race'],
unprivileged_protected_attributes=[[0]],
privileged_protected_attributes=[[1]])
  #*******************************************************Reweighing instance**************************************************************
  # race==1 is treated as the privileged group and race==0 as the unprivileged group,
  # consistent with the metric checks further below
  Reweigh = Reweighing(unprivileged_groups=disadvantagedGroup,
                     privileged_groups=advantagedGroup)
RW_Train = Reweigh .fit_transform(bldTrain )
RW_Test = Reweigh .fit_transform(bldTest)
#getting weights from reweighing to be added to Train set
weight= pd.DataFrame(data=RW_Train.instance_weights, columns= ['weight'])
weight=h2o.H2OFrame(weight)
  #*****************************************Reweighted Train and Test Set (features/labels unchanged)*******************************************************
train= pd.DataFrame(np.hstack([RW_Train .labels,RW_Train .features]),columns=train.columns)
test= pd.DataFrame(np.hstack([RW_Test .labels,RW_Test .features]),columns= test.columns)
# TotalRepairedDF= pd.concat([RepairedTrain ,RepairedTest ])
# normalization of train and test sets
Fitter= MM.fit(train)
transformed_train=Fitter.transform(train)
train=pd.DataFrame(transformed_train, columns= train.columns)
#test normalization
transformed_test=Fitter.transform(test)
test=pd.DataFrame(transformed_test, columns= test.columns)
# *************CHECKING FAIRNESS IN DATASET**************************
## ****************CONVERTING TO BLD FORMAT******************************
#Transforming the Train and Test Set to BinaryLabel
class Test(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(Test, self).__init__(df=test , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_Test= Test(protected_attribute_names= ['race'],
privileged_classes= [[1]])
## ********************Checking Bias Repaired Data********************************
DataBias_Checker = BinaryLabelDatasetMetric(BLD_Test , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
dsp= DataBias_Checker .statistical_parity_difference()
dif= DataBias_Checker.consistency()
ddi= DataBias_Checker.disparate_impact()
  print('The Statistical Parity difference is = {diff}'.format(diff= dsp ))
print('Individual Fairness is = {IF}'.format( IF= dif ))
print('Disparate Impact is = {IF}'.format( IF= ddi ))
# ********************SETTING TO H20 FRAME AND MODEL TRAINING*******************************
x = list(train.columns)
y = "two_year_recid"
x.remove(y)
Train=h2o.H2OFrame(train)
#adding weights to train set
Train= Train.cbind(weight)
Test= h2o.H2OFrame(test)
Train[y] = Train[y].asfactor()
Test[y] = Test[y].asfactor()
aml = H2OAutoML(max_models=10, nfolds=10, include_algos=['GBM'] , stopping_metric='AUTO') #verbosity='info',,'GBM', 'DRF'
aml.train(x=x, y=y, training_frame=Train,weights_column ='weight')
best_model= aml.leader
# a.model_performance()
#**********************REPLACE LABELS OF DUPLICATED TEST SET WITH PREDICTIONS****************************
#predicted labels
gbm_Predictions= best_model.predict(Test)
gbm_Predictions= gbm_Predictions.as_data_frame()
predicted_df= test.copy()
predicted_df['two_year_recid']= gbm_Predictions.predict.to_numpy()
# ********************COMPUTE DISCRIMINATION*****************************
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
class PredTest(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(PredTest, self).__init__(df=predicted_df , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_PredTest= PredTest(protected_attribute_names= ['race'],
privileged_classes= [[1]])
# # Workbook= pd.ExcelFile(r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/GBM/gbm_Results.xlsx')
# excelBook= load_workbook('/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/GBM/gbm_Results.xlsx')
# OldDF= excelBook.get_sheet_by_name("Violent")#pd.read_excel(Workbook,sheet_name='Violent')
#load workbook
excelBook= load_workbook('/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/GBM/gbm_Results.xlsx')
Violent= excelBook['Violent']
data= Violent.values
# Get columns
columns = next(data)[0:]
  # Create a DataFrame based on the second and subsequent lines of data
OldDF = pd.DataFrame(data, columns=columns)
ClassifierBias = ClassificationMetric( BLD_Test,BLD_PredTest , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
Accuracy= ClassifierBias.accuracy()
TPR= ClassifierBias.true_positive_rate()
TNR= ClassifierBias.true_negative_rate()
NPV= ClassifierBias.negative_predictive_value()
PPV= ClassifierBias.positive_predictive_value()
SP=ClassifierBias .statistical_parity_difference()
IF=ClassifierBias.consistency()
DI=ClassifierBias.disparate_impact()
EOP=ClassifierBias.true_positive_rate_difference()
EO=ClassifierBias.average_odds_difference()
FDR= ClassifierBias.false_discovery_rate(privileged=False)- ClassifierBias.false_discovery_rate(privileged=True)
NPV_diff=ClassifierBias.negative_predictive_value(privileged=False)-ClassifierBias.negative_predictive_value(privileged=True)
FOR=ClassifierBias.false_omission_rate(privileged=False)-ClassifierBias.false_omission_rate(privileged=True)
PPV_diff=ClassifierBias.positive_predictive_value(privileged=False) -ClassifierBias.positive_predictive_value(privileged=True)
BGE = ClassifierBias.between_group_generalized_entropy_index()
WGE = ClassifierBias.generalized_entropy_index()-ClassifierBias.between_group_generalized_entropy_index()
BGTI = ClassifierBias.between_group_theil_index()
WGTI = ClassifierBias.theil_index() -ClassifierBias.between_group_theil_index()
EDF= ClassifierBias.differential_fairness_bias_amplification()
newdf= pd.DataFrame(index = [0], data= { 'ACCURACY': Accuracy,'TPR': TPR, 'PPV':PPV, 'TNR':TNR,'NPV':NPV,'SP':SP,'CONSISTENCY':IF,'DI':DI,'EOP':EOP,'EO':EO,'FDR':FDR,'NPV_diff':NPV_diff,
'FOR':FOR,'PPV_diff':PPV_diff,'BGEI':BGE,'WGEI':WGE,'BGTI':BGTI,'WGTI':WGTI,'EDF':EDF,
'DATA_SP':dsp,'DATA_CONS':dif,'DATA_DI':ddi})
newdf=pd.concat([OldDF,newdf])
pathway= r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/GBM/gbm_Results.xlsx"
with pd.ExcelWriter(pathway, engine='openpyxl') as writer:
#load workbook base as for writer
writer.book= excelBook
writer.sheets=dict((ws.title, ws) for ws in excelBook.worksheets)
newdf.to_excel(writer, sheet_name='Violent', index=False)
# newdf.to_excel(writer, sheet_name='Adult', index=False)
print('Accuracy', Accuracy)
###Output
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.055323802725587914
Individual Fairness is = [0.81147132]
Disparate Impact is = 1.4779361846571624
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7817955112219451
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.08433684794672587
Individual Fairness is = [0.80099751]
Disparate Impact is = 1.6951400800457406
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7930174563591023
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.05778649052943628
Individual Fairness is = [0.80299252]
Disparate Impact is = 1.4639429668220456
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7743142144638404
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.101171875
Individual Fairness is = [0.7967581]
Disparate Impact is = 2.01171875
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7306733167082294
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.035035155001597956
Individual Fairness is = [0.8042394]
Disparate Impact is = 1.2546457607433217
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8042394014962594
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.0930054817625975
Individual Fairness is = [0.8074813]
Disparate Impact is = 1.8893649193548385
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7643391521197007
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.028307439962433123
Individual Fairness is = [0.82369077]
Disparate Impact is = 1.214428857715431
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7693266832917706
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.08867044413919414
Individual Fairness is = [0.81446384]
Disparate Impact is = 1.8407271740605073
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7992518703241895
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.05318609122956949
Individual Fairness is = [0.79526185]
Disparate Impact is = 1.3839776342427457
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8067331670822943
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.072985386782873
Individual Fairness is = [0.81396509]
Disparate Impact is = 1.6246690457004718
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7880299251870324
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.04727370400214881
Individual Fairness is = [0.83092269]
Disparate Impact is = 1.4313725490196079
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7942643391521197
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.03880367996719511
Individual Fairness is = [0.78678304]
Disparate Impact is = 1.2261060582703869
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.786783042394015
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.1212823275862069
Individual Fairness is = [0.80897756]
Disparate Impact is = 2.352764423076923
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.786783042394015
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.04903915818526823
Individual Fairness is = [0.81745636]
Disparate Impact is = 1.3966402500279047
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8092269326683291
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.08157399946279884
Individual Fairness is = [0.81072319]
Disparate Impact is = 1.7939869281045753
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7780548628428927
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.06771323914181057
Individual Fairness is = [0.80648379]
Disparate Impact is = 1.5281632653061226
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8017456359102244
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.07689928110772623
Individual Fairness is = [0.81521197]
Disparate Impact is = 1.658167376539657
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.800498753117207
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.06130688124306326
Individual Fairness is = [0.82693267]
Disparate Impact is = 1.5750162654521795
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8254364089775561
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.06473297980609566
Individual Fairness is = [0.79177057]
Disparate Impact is = 1.4757874015748031
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7892768079800498
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.0677233331290808
Individual Fairness is = [0.81197007]
Disparate Impact is = 1.618248492759028
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.773067331670823
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.03942281549857096
Individual Fairness is = [0.819202]
Disparate Impact is = 1.312270196449207
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7643391521197007
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.08789557448378087
Individual Fairness is = [0.81995012]
Disparate Impact is = 1.949948324228555
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8154613466334164
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.05277611615639785
Individual Fairness is = [0.81446384]
Disparate Impact is = 1.4163449163449164
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7605985037406484
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.0746293676284222
Individual Fairness is = [0.80798005]
Disparate Impact is = 1.5754317030296765
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.816708229426434
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.0814553061949751
Individual Fairness is = [0.80224439]
Disparate Impact is = 1.662955686531325
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.756857855361596
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.035065807144775724
Individual Fairness is = [0.81795511]
Disparate Impact is = 1.2625439919557566
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8079800498753117
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.02413314840499306
Individual Fairness is = [0.81147132]
Disparate Impact is = 1.1689320388349513
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8216957605985037
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.09795027884430474
Individual Fairness is = [0.79501247]
Disparate Impact is = 1.8934252706707797
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7693266832917706
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.06097222692060486
Individual Fairness is = [0.81072319]
Disparate Impact is = 1.4860841423948221
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8104738154613467
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.1173097152940378
Individual Fairness is = [0.8084788]
Disparate Impact is = 2.337330754352031
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7942643391521197
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.0655512231599188
Individual Fairness is = [0.81421446]
Disparate Impact is = 1.5543760587238848
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7593516209476309
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.08569972575551338
Individual Fairness is = [0.81147132]
Disparate Impact is = 1.8658627464263937
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7905236907730673
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.12955537370430986
Individual Fairness is = [0.79351621]
Disparate Impact is = 2.3048076923076923
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8029925187032418
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.0431622320548688
Individual Fairness is = [0.80174564]
Disparate Impact is = 1.2972308252869373
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7917705735660848
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.012164314350046668
Individual Fairness is = [0.82693267]
Disparate Impact is = 1.088991562876657
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8079800498753117
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.05886571813034064
Individual Fairness is = [0.82967581]
Disparate Impact is = 1.5169145873320538
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8204488778054863
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.1047905592886036
Individual Fairness is = [0.7915212]
Disparate Impact is = 1.8587004163927239
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7855361596009975
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.06279470198675496
Individual Fairness is = [0.80199501]
Disparate Impact is = 1.5267777777777778
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.8029925187032418
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.04737999391089612
Individual Fairness is = [0.81072319]
Disparate Impact is = 1.3777238403451997
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.830423940149626
Parse progress: |█████████████████████████████████████████████████████████| 100%
The Statistical Parity diference is = 0.06211562531714082
Individual Fairness is = [0.81620948]
Disparate Impact is = 1.524328954882924
Parse progress: |█████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
AutoML progress: |████████████████████████████████████████████████████████| 100%
gbm prediction progress: |████████████████████████████████████████████████| 100%
Accuracy 0.7942643391521197
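###Markdown
For reference, the two dataset-level numbers printed in each iteration above can be computed by hand: statistical parity difference is P(favorable | unprivileged) - P(favorable | privileged), and disparate impact is their ratio. A small illustrative sketch with toy numbers (not taken from the runs above):
###Code
# Toy numbers for illustration only
import numpy as np
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = favorable outcome
race   = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = privileged, 0 = unprivileged
p_unpriv = labels[race == 0].mean()
p_priv = labels[race == 1].mean()
print('statistical parity difference:', p_unpriv - p_priv)
print('disparate impact:', p_unpriv / p_priv)
###Output
_____no_output_____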
###Markdown
LOGISTIC REGRESSION REWEIGHING
###Code
for i in range(1,51,1):
train_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/DATASET/Violent/Train'
train_path= os.path.join(train_url ,("Train"+ str(i)+ ".csv"))
train= pd.read_csv(train_path)
first_column = train.pop('two_year_recid')
train.insert(0, 'two_year_recid', first_column)
test_url=r'/content/gdrive/MyDrive/Datasets/SurveyData/DATASET/Violent/Test'
test_path= os.path.join(test_url ,("Test"+ str(i)+ ".csv"))
test= pd.read_csv(test_path)
first_column = test.pop('two_year_recid')
test.insert(0, 'two_year_recid', first_column)
#********************************************************binary labels for DI Remover*************************************************************
bldTrain= BinaryLabelDataset(favorable_label=1,
unfavorable_label=0,
df=train,
label_names=['two_year_recid'],
protected_attribute_names=['race'],
unprivileged_protected_attributes=[[0]],
privileged_protected_attributes=[[1]])
bldTest= BinaryLabelDataset(favorable_label=1,
unfavorable_label=0,
df=test,
label_names=['two_year_recid'],
protected_attribute_names=['race'],
unprivileged_protected_attributes=[[0]],
privileged_protected_attributes=[[1]])
#*******************************************************Reweighing instance**************************************************************
Reweigh = Reweighing(unprivileged_groups=disadvantagedGroup,
                     privileged_groups=advantagedGroup)
RW_Train = Reweigh .fit_transform(bldTrain )
RW_Test = Reweigh .fit_transform(bldTest)
#getting weights from reweighing to be added to Train set
weight= pd.DataFrame(data=RW_Train.instance_weights, columns= ['weight'])
weight=h2o.H2OFrame(weight)
#*****************************************Repaired Train and Test Set*******************************************************
train= pd.DataFrame(np.hstack([RW_Train .labels,RW_Train .features]),columns=train.columns)
test= pd.DataFrame(np.hstack([RW_Test .labels,RW_Test .features]),columns= test.columns)
# TotalRepairedDF= pd.concat([RepairedTrain ,RepairedTest ])
# normalization of train and test sets
Fitter= MM.fit(train)
transformed_train=Fitter.transform(train)
train=pd.DataFrame(transformed_train, columns= train.columns)
#test normalization
transformed_test=Fitter.transform(test)
test=pd.DataFrame(transformed_test, columns= test.columns)
# *************CHECKING FAIRNESS IN DATASET**************************
## ****************CONVERTING TO BLD FORMAT******************************
#Transforming the Train and Test Set to BinaryLabel
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
# class Train(StandardDataset):
# def __init__(self,label_name= 'two_year_recid',
# favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
# super(Train, self).__init__(df=train , label_name=label_name ,
# favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
# privileged_classes=privileged_classes ,
# )
# BLD_Train= Train(protected_attribute_names= ['race'],
# privileged_classes= [[1]])
class Test(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(Test, self).__init__(df=test , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_Test= Test(protected_attribute_names= ['race'],
privileged_classes= [[1]])
## ********************Checking Bias in Data********************************
DataBias_Checker = BinaryLabelDatasetMetric(BLD_Test , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
dsp= DataBias_Checker .statistical_parity_difference()
dif= DataBias_Checker.consistency()
ddi= DataBias_Checker.disparate_impact()
print('The Statistical Parity difference is = {diff}'.format(diff= dsp ))
print('Individual Fairness is = {IF}'.format( IF= dif ))
print('Disparate Impact is = {IF}'.format( IF= ddi ))
# ********************SETTING TO H20 FRAME AND MODEL TRAINING*******************************
x = list(train.columns)
y = "two_year_recid"
x.remove(y)
Train=h2o.H2OFrame(train)
#adding weights to train set
Train= Train.cbind(weight)
Test= h2o.H2OFrame(test)
Train[y] = Train[y].asfactor()
Test[y] = Test[y].asfactor()
LogReg = H2OGeneralizedLinearEstimator(family= "binomial", lambda_ = 0)
LogReg.train(x=x, y=y, training_frame=Train,weights_column ='weight')
LogReg_Predictions= LogReg.predict(Test)
LogReg_Predictions= LogReg_Predictions.as_data_frame()
# *************************REPLACE LABELS OF DUPLICATED TEST SET WITH PREDICTIONS**************************************
predicted_df= test.copy()
predicted_df['two_year_recid']= LogReg_Predictions.predict.to_numpy()
# ***************************COMPUTE DISCRIMINATION********************************
advantagedGroup= [{'race':1}]
disadvantagedGroup= [{'race':0}]
class PredTest(StandardDataset):
def __init__(self,label_name= 'two_year_recid',
favorable_classes= [1],protected_attribute_names=['race'], privileged_classes=[[1]], ):
super(PredTest, self).__init__(df=predicted_df , label_name=label_name ,
favorable_classes=favorable_classes , protected_attribute_names=protected_attribute_names ,
privileged_classes=privileged_classes ,
)
BLD_PredTest= PredTest(protected_attribute_names= ['race'],
privileged_classes= [[1]])
excelBook= load_workbook(r'/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/LogReg/LR_Results.xlsx')
Violent= excelBook['Violent']
data= Violent.values
# Get columns
columns = next(data)[0:]
OldDF = pd.DataFrame(data, columns=columns)
ClassifierBias = ClassificationMetric( BLD_Test,BLD_PredTest , unprivileged_groups= disadvantagedGroup, privileged_groups= advantagedGroup)
Accuracy= ClassifierBias.accuracy()
TPR= ClassifierBias.true_positive_rate()
TNR= ClassifierBias.true_negative_rate()
NPV= ClassifierBias.negative_predictive_value()
PPV= ClassifierBias.positive_predictive_value()
SP=ClassifierBias .statistical_parity_difference()
IF=ClassifierBias.consistency()
DI=ClassifierBias.disparate_impact()
EOP=ClassifierBias.true_positive_rate_difference()
EO=ClassifierBias.average_odds_difference()
FDR= ClassifierBias.false_discovery_rate(privileged=False)- ClassifierBias.false_discovery_rate(privileged=True)
NPV_diff=ClassifierBias.negative_predictive_value(privileged=False)-ClassifierBias.negative_predictive_value(privileged=True)
FOR=ClassifierBias.false_omission_rate(privileged=False)-ClassifierBias.false_omission_rate(privileged=True)
PPV_diff=ClassifierBias.positive_predictive_value(privileged=False) -ClassifierBias.positive_predictive_value(privileged=True)
BGE = ClassifierBias.between_group_generalized_entropy_index()
WGE = ClassifierBias.generalized_entropy_index()-ClassifierBias.between_group_generalized_entropy_index()
BGTI = ClassifierBias.between_group_theil_index()
WGTI = ClassifierBias.theil_index() -ClassifierBias.between_group_theil_index()
EDF= ClassifierBias.differential_fairness_bias_amplification()
newdf= pd.DataFrame(index = [0], data= { 'ACCURACY': Accuracy,'TPR': TPR, 'PPV':PPV, 'TNR':TNR,'NPV':NPV,'SP':SP,'CONSISTENCY':IF,'DI':DI,'EOP':EOP,'EO':EO,'FDR':FDR,'NPV_diff':NPV_diff,
'FOR':FOR,'PPV_diff':PPV_diff,'BGEI':BGE,'WGEI':WGE,'BGTI':BGTI,'WGTI':WGTI,'EDF':EDF,
'DATA_SP':dsp,'DATA_CONS':dif,'DATA_DI':ddi})
newdf=pd.concat([OldDF,newdf])
pathway= r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/LogReg/LR_Results.xlsx"
with pd.ExcelWriter(pathway, engine='openpyxl') as writer:
#load workbook base as for writer
writer.book= excelBook
writer.sheets=dict((ws.title, ws) for ws in excelBook.worksheets)
newdf.to_excel(writer, sheet_name='Violent', index=False)
# newdf.to_excel(writer, sheet_name='Adult', index=False)
print('Accuracy', Accuracy)
###Output
_____no_output_____ |
colaboratory_test_1.ipynb | ###Markdown
###Code
#@title Default title text
print("hello")
###Output
hello
###Markdown
How are you? A change was made here in the new revision.
###Code
###Output
_____no_output_____
###Markdown
New Section
###Code
###Output
_____no_output_____ |
Assignment4_DongZheng.ipynb | ###Markdown
**4.1a**
###Code
V_P = portfolio_variance(Var, weight)
V_P = round(V_P[:, 0][0], 5)
V_P
###Output
_____no_output_____
###Markdown
**4.1b**
###Code
# beta_i = cov(Rmarket, Ri) / var(Rmarket)
# cov(aX + bY, Z) = a × cov(X, Z) + b × cov(Y, Z)
beta_1 = 0.064 / 0.04
print('Beta for Asset 1: ' + str(beta_1))
beta_2 = 0.032 / 0.04
print('Beta for Asset 2: ' + str(beta_2))
beta_P = (0.4 * 0.064 + 0.4 * 0.032) / 0.04
beta_P = round(beta_P, 5)
print('Beta for portfolio: ' + str(beta_P))
###Output
Beta for Asset 1: 1.6
Beta for Asset 2: 0.8
Beta for portfolio: 0.96
###Markdown
**4.1c**
###Code
# E[ri] − rf = βi (E[rmarket] − rf )
E_P = beta_P * (0.1 - 0.04) + 0.04
print('Expected return for portfolio is: ' + str(E_P))
###Output
Expected return for portfolio is: 0.0976
###Markdown
**4.1d**
###Code
# Refer to the attached document for calculation details
###Output
_____no_output_____
###Markdown
**4.2a1**
###Code
data_raw = pd.read_csv('ps4_data.csv')
data_raw
data_raw.columns.values
data_raw.describe()
df = data_raw
df = df.set_index('Date')
df
df.columns
df_raw = df.sub(df['T-bills'], axis=0)
df_raw = df_raw.drop(columns=['T-bills'])
df_raw
#Portfolio 1: GM and IBM
w_1 = np.array([0.5, 0.5])
w_1 = w_1.reshape((1,-1))
w_1 = w_1.transpose()
w_1
#Portfolio 2: GM, IBM and Anheuser Busch
w_2 = np.array([1/3, 1/3, 1/3])
w_2 = w_2.reshape((1,-1))
w_2 = w_2.transpose()
w_2
#Portfolio 3: GM, Toyota, IBM and Anheuser Busch
w_3 = np.array([1/4, 1/4, 1/4, 1/4])
w_3 = w_3.reshape((1,-1))
w_3 = w_3.transpose()
w_3
#Portfolio 4: 'TOYOTA', 'GM', 'BMW', 'FORD', 'CHRYSLER', 'APPLE', 'IBM', 'COMPAQ', 'HP',
#'BUSCH','HEINEKEN', 'KIRIN', 'MOLSON'
w_4 = np.ones(13)
w_4 /= 13
w_4 = w_4.reshape((1,-1))
w_4 = w_4.transpose()
w_4
df = df_raw/100
df
describe = df.describe()
describe.loc['mean']
mu_temp = np.array([[describe.loc['mean','GM'], describe.loc['mean','IBM']]])
mu_1 = mu_temp @ w_1
covar_1 = df[['GM', 'IBM']].cov()
var_1 = portfolio_variance(covar_1, w_1)
print('Average monthly excess return for Portfolio 1: ' + str(round(mu_1[:, 0][0] * 100, 5)) + '%')
print('Variance of monthly excess return for Portfolio 1: ' + str(round(var_1.iloc[0,0], 5)))
mu_temp = np.array([[describe.loc['mean','GM'], describe.loc['mean','IBM'], describe.loc['mean','BUSCH']]])
mu_2 = mu_temp @ w_2
covar_2 = df[['GM', 'IBM', 'BUSCH']].cov()
var_2 = portfolio_variance(covar_2, w_2)
print('Average monthly excess return for Portfolio 2: ' + str(round(mu_2[:, 0][0]*100, 5)) + '%')
print('Variance of monthly excess return for Portfolio 2: ' + str(round(var_2.iloc[0,0], 5)))
mu_temp = np.array([[describe.loc['mean','GM'], describe.loc['mean','IBM'], describe.loc['mean','BUSCH'], describe.loc['mean','TOYOTA']]])
mu_3 = mu_temp @ w_3
covar_3 = df[['GM', 'IBM', 'BUSCH','TOYOTA']].cov()
var_3 = portfolio_variance(covar_3, w_3)
print('Average monthly excess return for Portfolio 3: ' + str(round(mu_3[:, 0][0]*100, 5)) + '%')
print('Variance of monthly excess return for Portfolio 3: ' + str(round(var_3.iloc[0,0], 5)))
mu_temp = np.array([describe.iloc[1,-13:]])
mu_4 = mu_temp @ w_4
covar_4 = df.iloc[:,-13:].cov()
var_4 = portfolio_variance(covar_4, w_4)
print('Average monthly excess return for Portfolio 4: ' + str(round(mu_4[:, 0][0]*100, 5)) + '%')
print('Variance of monthly excess return for Portfolio 4: ' + str(round(var_4.iloc[0,0], 5)))
###Output
Average monthly excess return for Portfolio 4: 1.22384%
Variance of monthly excess return for Portfolio 4: 0.00251
###Markdown
**4.2a2**
###Code
df.iloc[:,-13:]
from sklearn.linear_model import LinearRegression
Y_1 = (df['IBM'] * 0.5).add(0.5 * df['GM'], axis=0)
X_1 = df[['Market: World']]
reg_1 = LinearRegression().fit(X_1, Y_1)
print('Portfolio 1 Beta from linear reg: ' + str(reg_1.coef_[0]))
# beta_i = cov(Rmarket, Ri) / var(Rmarket)
# cov(aX + bY, Z) = a × cov(X, Z) + b × cov(Y, Z)
covar_P1 = df[['GM', 'IBM', 'Market: World']].cov()
beta_P1 = (0.5 * covar_P1.iloc[0,2] + 0.5 * covar_P1.iloc[1,2]) / covar_P1.iloc[2,2]
print('Portfolio 1 Beta from definition: ' + str(beta_P1))
(df['IBM'] * 0.5).add(0.5 * df['GM'], axis=0)
X_1
covar_P1 = df[['GM', 'IBM', 'Market: World']].cov()
covar_P1
Y_2 = (df['IBM'] / 3).add(df['GM'] / 3, axis=0).add(df['BUSCH'] / 3, axis=0)
X_2 = df[['Market: World']]
reg_2 = LinearRegression().fit(X_2, Y_2)
print('Portfolio 2 Beta from linear reg: ' + str(reg_2.coef_[0]))
# beta_i = cov(Rmarket, Ri) / var(Rmarket)
# cov(aX + bY, Z) = a × cov(X, Z) + b × cov(Y, Z)
covar_P2 = df[['GM', 'IBM', 'BUSCH', 'Market: World']].cov()
beta_P2 = ( covar_P2.iloc[0,3] / 3 + covar_P2.iloc[1,3] / 3 + covar_P2.iloc[2,3] / 3) / covar_P2.iloc[3,3]
print('Portfolio 2 Beta from definition: ' + str(beta_P2))
Y_3 = (df['IBM'] / 4).add(df['GM'] / 4, axis=0).add(df['BUSCH'] / 4, axis=0).add(df['TOYOTA'] / 4, axis=0)
X_3 = df[['Market: World']]
reg_3 = LinearRegression().fit(X_3, Y_3)
print('Portfolio 3 Beta from linear reg: ' + str(reg_3.coef_[0]))
# beta_i = cov(Rmarket, Ri) / var(Rmarket)
# cov(aX + bY, Z) = a × cov(X, Z) + b × cov(Y, Z)
covar_P3 = df[['GM', 'IBM', 'BUSCH', 'TOYOTA', 'Market: World']].cov()
beta_P3 = ( covar_P3.iloc[0,4] / 4 + covar_P3.iloc[1,4] / 4 + covar_P3.iloc[2,4] / 4 + covar_P3.iloc[3,4] / 4) / covar_P3.iloc[4,4]
print('Portfolio 3 Beta from definition: ' + str(beta_P3))
Y_4 = (df.iloc[:,-13:] / 13).sum(axis=1)
X_4 = df[['Market: World']]
reg_4 = LinearRegression().fit(X_4, Y_4)
print('Portfolio 4 Beta from linear reg: ' + str(reg_4.coef_[0]))
# beta_i = cov(Rmarket, Ri) / var(Rmarket)
# cov(aX + bY, Z) = a × cov(X, Z) + b × cov(Y, Z)
covar_P4 = pd.concat([df.iloc[:,-13:], df[['Market: World']]], axis=1).cov()
beta_P4 = ( covar_P4.iloc[:-1,13].sum() / 13) / covar_P4.iloc[13,13]
print('Portfolio 4 Beta from definition: ' + str(beta_P4))
###Output
Portfolio 4 Beta from linear reg: 0.9312077484900599
Portfolio 4 Beta from definition: 0.9312077484900597
###Markdown
**4.2a3**
###Code
# Refer to the attached document for discussion details
###Output
_____no_output_____
###Markdown
**4.2b1**
###Code
reg_list = []
for i in range(13):
Y_each = df.iloc[:,-13+i]
X_each = df[['Market: World']]
reg_each = LinearRegression().fit(X_each, Y_each)
reg_list.append(reg_each.coef_[0])
print('Beta of each 13 common stock from linear reg: ' + str(reg_each.coef_[0]))
df_reg = pd.DataFrame(data={'Beta': reg_list})
df_reg['Stock Name'] = df.iloc[:,-13:].columns.values
df_reg = df_reg.set_index('Stock Name')
df_reg
beta_P1_new = sum(df_reg.loc[['GM','IBM'],'Beta'].tolist())/2
print('Portfolio 1 Beta from definition: ' + str(beta_P1))
print('Portfolio 1 Beta from weighted average of each asset: ' + str(beta_P1_new))
beta_P2_new = sum(df_reg.loc[['GM','IBM','BUSCH'],'Beta'].tolist())/3
print('Portfolio 2 Beta from definition: ' + str(beta_P2))
print('Portfolio 2 Beta from weighted average of each asset: ' + str(beta_P2_new))
beta_P3_new = sum(df_reg.loc[['GM','IBM','BUSCH','TOYOTA'],'Beta'].tolist())/4
print('Portfolio 3 Beta from definition: ' + str(beta_P3))
print('Portfolio 3 Beta from weighted average of each asset: ' + str(beta_P3_new))
beta_P4_new = sum(df_reg.iloc[:,0].tolist())/13
print('Portfolio 4 Beta from definition: ' + str(beta_P4))
print('Portfolio 4 Beta from weighted average of each asset: ' + str(beta_P4_new))
###Output
Portfolio 4 Beta from definition: 0.9312077484900597
Portfolio 4 Beta from weighted average of each asset: 0.9312077484900596
###Markdown
**4.2b2**
###Code
capm_expected_return_list = []
for i in range(13):
capm_expected_return_list.append(reg_list[i] * describe.loc['mean','Market: World'])
capm_expected_return_list
describe.iloc[1,-13:].tolist()
plt.figure(figsize=(8, 5))
plt.xlabel('Common Stock')
plt.ylabel('Return')
plt.xticks(rotation=90)
plt.plot(df.iloc[:,-13:].columns.values, describe.iloc[1,-13:].tolist(),'bo', markersize=10, label='Sample Average')
plt.plot(df.iloc[:,-13:].columns.values, capm_expected_return_list,'ro', markersize=10, label='CAPM Expected')
plt.legend(bbox_to_anchor=(0, 1), loc='upper left', ncol=1)
plt.show()
###Output
_____no_output_____
###Markdown
**3a**
###Code
prob = 10 * pow(0.5, 9) * 0.5 + pow(0.5, 10)
print('Probability of any given manager achieves: ' + str(prob))
i=1
prob_best=0
for i in range(1,501):
prob_best += pow(prob, i)* pow((1 - prob), 500-i) * (math.factorial(500)/(math.factorial(i)*math.factorial(500-i)))
print('Probability of at least 1 out of 500 managers achieves: ' + str(prob_best))
###Output
Probability of at least 1 out of 500 managers achieves: 0.9954840995436917
|
materiaali/harjoitukset/02_HistogramminPiirto.ipynb | ###Markdown
2. Drawing a histogram In this exercise we learn how to draw a histogram of the invariant mass with Python. As data we use data collected by the CMS experiment in 2011 from collisions of two protons [1]. The CSV file used in this exercise has been reduced from the aforementioned data to interesting events in which the particle detector has observed two muons whose invariant mass is between 8–12 GeV [2]. 1) Setting things up Let's start by importing the required modules and reading the data file. > - Import the modules **pandas**, **numpy** and **matplotlib.pyplot** with the _import_ command> - Use the abbreviations *pd*, *np* and *plt* for the modules, so that the full module name does not have to be written out every time.$\color{purple}{\text{Write the code below.}}$
###Code
# Import the required modules with the import command
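# One possible solution sketch (a hedged example, not the only valid answer):
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt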
###Output
_____no_output_____
###Markdown
> - Read the data file with the pandas module's *read_csv* method and save the contents in the variable _datasetti_.> - The path of the data file is '[https://raw.githubusercontent.com/cms-opendata-education/cms-jupyter-materials-finnish/master/Data/Ymumu_Run2011A.csv](https://raw.githubusercontent.com/cms-opendata-education/cms-jupyter-materials-finnish/master/Data/Ymumu_Run2011A.csv)'$\color{purple}{\text{Write the code below.}}$
###Code
# Read the data file and save the data in the variable 'datasetti'.
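# A possible solution sketch, assuming the URL given above is reachable:
datasetti = pd.read_csv('https://raw.githubusercontent.com/cms-opendata-education/cms-jupyter-materials-finnish/master/Data/Ymumu_Run2011A.csv')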
###Output
_____no_output_____
###Markdown
> What does the saved file contain? Check by printing at least the first five rows. $\color{purple}{\text{Write the code below.}}$
###Code
# Print the first five rows of the file (hint: use the head() method)
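# A possible solution sketch, assuming 'datasetti' was created in the previous cell:
datasetti.head()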
###Output
_____no_output_____
###Markdown
We notice that the invariant mass has already been calculated in this data file, so let's use the precomputed mass values. > - Save the column of the dataset that contains the invariant mass in the variable invariantti_massa.> - Hint: In Exercise 1 columns such as _eta1_ or _psi2_ were selected from the dataset. You can save the invariant mass column in the same way.$\color{purple}{\text{Write the code below.}}$
###Code
# Save the invariant mass in the variable invariantti_massa
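# A possible solution sketch; the column name 'M' is an assumption about the CSV layout:
invariantti_massa = datasetti['M']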
###Output
_____no_output_____
###Markdown
> - Use the len() function to find out how many invariant mass values have been stored in the variable invariantti_massa.> - In the len() function a list is placed inside the parentheses - in this case our variable *invariantti_massa* - and the function counts how many values the list contains.$\color{purple}{\text{Write the code below.}}$
###Code
# Use the len() function to find out how many invariant mass values the data contains.
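# A possible solution sketch:
len(invariantti_massa)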
###Output
_____no_output_____
###Markdown
2) Drawing the histogram Next, let's draw a histogram of the invariant mass values. For drawing the histogram we use the matplotlib.pyplot module that was imported earlier. A histogram is a bar chart that in this case describes how many collision events fall on each value of the invariant mass. > $\color{purple}{\text{Draw the histogram using the following commands:}}$- **plt.hist**(*variable*, *bins=number of bins*): replace variable with the invariant mass and the number of bins with the number you want. Try what the histogram looks like with different numbers of bins, e.g. 50 or 500.- **plt.title**(): the title of the histogram in quotation marks- **plt.xlabel**() and plt.ylabel(): the titles of the x and y axes in quotation marks- **plt.show**(): add this command at the end so that the histogram is shown on screen
###Code
# Draw the histogram
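# A possible solution sketch; the bin count and the titles below are only examples:
plt.hist(invariantti_massa, bins=500)
plt.title('Invariant mass of two muons')
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.show()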
###Output
_____no_output_____ |
lessons/NLP and ML pipelines/cleaning_practice.ipynb | ###Markdown
Cleaning Quiz: Udacity's Course CatalogIt's your turn! Udacity's [course catalog page](https://www.udacity.com/courses/all) has changed since the last video was filmed. One notable change is the introduction of _schools_.In this activity, you're going to perform similar actions with BeautifulSoup to extract the following information from each course listing on the page:1. The course name - e.g. "Data Analyst"2. The school the course belongs to - e.g. "School of Data Science"**Note: All solution notebooks can be found by clicking on the Jupyter icon on the top left of this workspace.** Step 1: Get text from Udacity's course catalog web pageYou can use the `requests` library to do this.Outputting all the javascript, CSS, and text may overload the space available to load this notebook, so we omit a print statement here.
###Code
# import statements
import requests
from bs4 import BeautifulSoup
# fetch web page
r = requests.get("https://www.udacity.com/courses/all")
###Output
_____no_output_____
###Markdown
Step 2: Use BeautifulSoup to remove HTML tagsUse `"lxml"` rather than `"html5lib"`.Again, printing this entire result may overload the space available to load this notebook, so we omit a print statement here.
###Code
soup = BeautifulSoup(r.text, 'lxml')
###Output
_____no_output_____
###Markdown
Step 3: Find all course summariesUse the BeautifulSoup's `find_all` method to select based on tag type and class name. Just like in the video, you can right click on the item, and click "Inspect" to view its html on a web page.
###Code
# Find all course summaries
summaries = soup.find_all('div', class_="course-summary-card row row-gap-medium catalog-card nanodegree-card ng-star-inserted")
print('Number of Courses:', len(summaries))
###Output
Number of Courses: 43
###Markdown
Step 4: Inspect the first summary to find selectors for the course name and schoolTip: `.prettify()` is a super helpful method BeautifulSoup provides to output html in a nicely indented form! Make sure to use `print()` to ensure whitespace is displayed properly.
###Code
# print the first summary in summaries
print(summaries[0].prettify())
###Output
<div _ngcontent-sc204="" class="course-summary-card row row-gap-medium catalog-card nanodegree-card ng-star-inserted">
<ir-catalog-card _ngcontent-sc204="" _nghost-sc207="">
<div _ngcontent-sc207="" class="card-wrapper is-collapsed">
<div _ngcontent-sc207="" class="card__inner card mb-0">
<div _ngcontent-sc207="" class="card__inner--upper">
<div _ngcontent-sc207="" class="image_wrapper hidden-md-down">
<a _ngcontent-sc207="" href="/course/product-manager-nanodegree--nd036">
<!-- -->
<div _ngcontent-sc207="" class="image-container ng-star-inserted" style="background-image:url(https://d20vrrgs8k4bvw.cloudfront.net/images/degrees/nd036/catalog+image+nd036.jpg);">
<div _ngcontent-sc207="" class="image-overlay">
</div>
</div>
</a>
<!-- -->
</div>
<div _ngcontent-sc207="" class="card-content">
<!-- -->
<span _ngcontent-sc207="" class="tag tag--new card ng-star-inserted">
New
</span>
<!-- -->
<div _ngcontent-sc207="" class="category-wrapper">
<span _ngcontent-sc207="" class="mobile-icon">
</span>
<!-- -->
<h4 _ngcontent-sc207="" class="category ng-star-inserted">
School of Business
</h4>
</div>
<h3 _ngcontent-sc207="" class="card-heading">
<a _ngcontent-sc207="" class="capitalize" href="/course/product-manager-nanodegree--nd036">
Product Manager
</a>
</h3>
<div _ngcontent-sc207="" class="right-sub">
<!-- -->
<div _ngcontent-sc207="" class="skills ng-star-inserted">
<h4 _ngcontent-sc207="">
Skills Covered
</h4>
<span _ngcontent-sc207="" class="truncate-content">
<!-- -->
<span _ngcontent-sc207="" class="ng-star-inserted">
Product Strategy,
</span>
<span _ngcontent-sc207="" class="ng-star-inserted">
Product Design,
</span>
<span _ngcontent-sc207="" class="ng-star-inserted">
Product Development,
</span>
<span _ngcontent-sc207="" class="ng-star-inserted">
Design Sprint,
</span>
<span _ngcontent-sc207="" class="ng-star-inserted">
Product Launch
</span>
</span>
</div>
<!-- -->
<div _ngcontent-sc207="" class="hidden-md-up level">
<span _ngcontent-sc207="" class="course-level course-level-beginner" classname="course-level course-level-beginner">
</span>
<span _ngcontent-sc207="" class="capitalize">
beginner
</span>
</div>
</div>
</div>
</div>
<div _ngcontent-sc207="" class="card__inner--lower hidden-sm-down">
<div _ngcontent-sc207="" class="left uppercase blue expander pointer">
<!-- -->
<span _ngcontent-sc207="" class="ng-star-inserted">
Program Details
</span>
<!-- -->
</div>
<div _ngcontent-sc207="" class="right">
<!-- -->
<span _ngcontent-sc207="" class="caption text-right level ng-star-inserted">
<span _ngcontent-sc207="" class="course-level course-level-beginner" classname="course-level course-level-beginner">
</span>
<span _ngcontent-sc207="" class="capitalize">
beginner
</span>
</span>
</div>
</div>
</div>
<div _ngcontent-sc207="" class="card__expander">
<div _ngcontent-sc207="" class="card__expander--summary mb-1">
<!-- -->
<span _ngcontent-sc207="" class="ng-star-inserted">
Envision and execute the development of industry-defining products, and learn how to successfully bring them to market.
</span>
</div>
<hr _ngcontent-sc207=""/>
<div _ngcontent-sc207="" class="card__expander--details">
<div _ngcontent-sc207="" class="rating">
<!-- -->
</div>
<a _ngcontent-sc207="" class="button--primary btn" href="/course/product-manager-nanodegree--nd036">
Learn More
</a>
</div>
</div>
</div>
</ir-catalog-card>
<!-- -->
</div>
###Markdown
Look for selectors that contain the courses title and school name text you want to extract. Then, use the `select_one` method on the summary object to pull out the html with those selectors. Afterwards, don't forget to do some extra cleaning to isolate the names (get rid of unnecessary html), as you saw in the last video.
###Code
# Extract course title
summaries[0].select_one("h3 a").get_text().strip()
# Extract school
summaries[0].select_one("h4").get_text().strip()
###Output
_____no_output_____
###Markdown
Step 5: Collect names and schools of ALL course listingsReuse your code from the previous step, but now in a loop to extract the name and school from every course summary in `summaries`!
###Code
courses = []
for summary in summaries:
# append name and school of each summary to courses list
name = summary.select_one("h3 a").get_text().strip()
school = summary.select_one("h4").get_text().strip()
courses.append((name,school))
# display results
print(len(courses), "course summaries found. Sample:")
courses[:20]
###Output
43 course summaries found. Sample:
|
tests/keras/rnn_in_keras.ipynb | ###Markdown
Recurrent Neural Network in Keras In this notebook, we use an RNN to classify IMDB movie reviews by their sentiment.
###Code
!pip install watermark
!pip install nltk
!pip install theano
!pip install mxnet
!pip install chainer
!pip install seaborn
!pip install keras
!pip install scikit-image
!pip install tqdm
!pip install tflearn
!pip install h5py
###Output
Collecting watermark
Downloading watermark-1.6.0-py3-none-any.whl
Requirement already satisfied: ipython in /srv/venv/lib/python3.6/site-packages (from watermark)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: decorator in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: pickleshare in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: pexpect; sys_platform != "win32" in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: jedi>=0.10 in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: setuptools>=18.5 in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: traitlets>=4.2 in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: simplegeneric>0.8 in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: pygments in /srv/venv/lib/python3.6/site-packages (from ipython->watermark)
Requirement already satisfied: six>=1.9.0 in /srv/venv/lib/python3.6/site-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython->watermark)
Requirement already satisfied: wcwidth in /srv/venv/lib/python3.6/site-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython->watermark)
Requirement already satisfied: ptyprocess>=0.5 in /srv/venv/lib/python3.6/site-packages (from pexpect; sys_platform != "win32"->ipython->watermark)
Requirement already satisfied: parso==0.1.1 in /srv/venv/lib/python3.6/site-packages (from jedi>=0.10->ipython->watermark)
Requirement already satisfied: ipython-genutils in /srv/venv/lib/python3.6/site-packages (from traitlets>=4.2->ipython->watermark)
Installing collected packages: watermark
Successfully installed watermark-1.6.0
Collecting nltk
Downloading nltk-3.2.5.tar.gz (1.2MB)
[K 100% |████████████████████████████████| 1.2MB 1.1MB/s ta 0:00:01
[?25hRequirement already satisfied: six in /srv/venv/lib/python3.6/site-packages (from nltk)
Building wheels for collected packages: nltk
Running setup.py bdist_wheel for nltk ... [?25ldone
[?25h Stored in directory: /home/jovyan/.cache/pip/wheels/18/9c/1f/276bc3f421614062468cb1c9d695e6086d0c73d67ea363c501
Successfully built nltk
Installing collected packages: nltk
Successfully installed nltk-3.2.5
Collecting theano
Downloading Theano-1.0.1.tar.gz (2.8MB)
[K 100% |████████████████████████████████| 2.8MB 473kB/s eta 0:00:01
[?25hRequirement already satisfied: numpy>=1.9.1 in /srv/venv/lib/python3.6/site-packages (from theano)
Requirement already satisfied: scipy>=0.14 in /srv/venv/lib/python3.6/site-packages (from theano)
Requirement already satisfied: six>=1.9.0 in /srv/venv/lib/python3.6/site-packages (from theano)
Building wheels for collected packages: theano
Running setup.py bdist_wheel for theano ... [?25ldone
[?25h Stored in directory: /home/jovyan/.cache/pip/wheels/46/a2/7d/b4cac381d5151daa9f9e0b3e4e4b65edaea6355ae296c97cf2
Successfully built theano
Installing collected packages: theano
Successfully installed theano-1.0.1
Collecting mxnet
Downloading mxnet-1.0.0.post4-py2.py3-none-manylinux1_x86_64.whl (27.4MB)
[K 100% |████████████████████████████████| 27.5MB 48kB/s eta 0:00:01 1% |▍ | 327kB 5.7MB/s eta 0:00:05
[?25hRequirement already satisfied: numpy<=1.13.3 in /srv/venv/lib/python3.6/site-packages (from mxnet)
Collecting requests==2.18.4 (from mxnet)
Downloading requests-2.18.4-py2.py3-none-any.whl (88kB)
[K 100% |████████████████████████████████| 92kB 9.8MB/s eta 0:00:01
[?25hCollecting graphviz==0.8.1 (from mxnet)
Downloading graphviz-0.8.1-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests==2.18.4->mxnet)
Downloading certifi-2018.1.18-py2.py3-none-any.whl (151kB)
[K 100% |████████████████████████████████| 153kB 7.6MB/s eta 0:00:01
[?25hCollecting urllib3<1.23,>=1.21.1 (from requests==2.18.4->mxnet)
Downloading urllib3-1.22-py2.py3-none-any.whl (132kB)
[K 100% |████████████████████████████████| 133kB 8.6MB/s eta 0:00:01
[?25hCollecting chardet<3.1.0,>=3.0.2 (from requests==2.18.4->mxnet)
Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB)
[K 100% |████████████████████████████████| 143kB 8.6MB/s eta 0:00:01
[?25hCollecting idna<2.7,>=2.5 (from requests==2.18.4->mxnet)
Downloading idna-2.6-py2.py3-none-any.whl (56kB)
[K 100% |████████████████████████████████| 61kB 10.2MB/s ta 0:00:01
[?25hInstalling collected packages: certifi, urllib3, chardet, idna, requests, graphviz, mxnet
Successfully installed certifi-2018.1.18 chardet-3.0.4 graphviz-0.8.1 idna-2.6 mxnet-1.0.0.post4 requests-2.18.4 urllib3-1.22
Collecting chainer
Downloading chainer-3.3.0.tar.gz (349kB)
[K 100% |████████████████████████████████| 358kB 2.4MB/s ta 0:00:01
[?25hCollecting filelock (from chainer)
Downloading filelock-3.0.4.tar.gz
Requirement already satisfied: numpy>=1.9.0 in /srv/venv/lib/python3.6/site-packages (from chainer)
Requirement already satisfied: protobuf>=3.0.0 in /srv/venv/lib/python3.6/site-packages (from chainer)
Requirement already satisfied: six>=1.9.0 in /srv/venv/lib/python3.6/site-packages (from chainer)
Requirement already satisfied: setuptools in /srv/venv/lib/python3.6/site-packages (from protobuf>=3.0.0->chainer)
Building wheels for collected packages: chainer, filelock
Running setup.py bdist_wheel for chainer ... [?25ldone
[?25h Stored in directory: /home/jovyan/.cache/pip/wheels/0e/a8/da/5dab1d9722577de0d77f1f562961264ee89f47ddfe6f52188c
Running setup.py bdist_wheel for filelock ... [?25ldone
[?25h Stored in directory: /home/jovyan/.cache/pip/wheels/5f/5e/8a/9f1eb481ffbfff95d5f550570c1dbeff3c1785c8383c12c62b
Successfully built chainer filelock
Installing collected packages: filelock, chainer
Successfully installed chainer-3.3.0 filelock-3.0.4
Collecting seaborn
Downloading seaborn-0.8.1.tar.gz (178kB)
[K 100% |████████████████████████████████| 184kB 3.5MB/s ta 0:00:01
[?25hBuilding wheels for collected packages: seaborn
Running setup.py bdist_wheel for seaborn ... [?25ldone
[?25h Stored in directory: /home/jovyan/.cache/pip/wheels/29/af/4b/ac6b04ec3e2da1a450e74c6a0e86ade83807b4aaf40466ecda
Successfully built seaborn
Installing collected packages: seaborn
Successfully installed seaborn-0.8.1
Collecting keras
Downloading Keras-2.1.3-py2.py3-none-any.whl (319kB)
[K 100% |████████████████████████████████| 327kB 2.5MB/s ta 0:00:01
[?25hRequirement already satisfied: six>=1.9.0 in /srv/venv/lib/python3.6/site-packages (from keras)
Requirement already satisfied: scipy>=0.14 in /srv/venv/lib/python3.6/site-packages (from keras)
Requirement already satisfied: numpy>=1.9.1 in /srv/venv/lib/python3.6/site-packages (from keras)
Requirement already satisfied: pyyaml in /srv/venv/lib/python3.6/site-packages (from keras)
Installing collected packages: keras
Successfully installed keras-2.1.3
Collecting scikit-image
Downloading scikit_image-0.13.1-cp36-cp36m-manylinux1_x86_64.whl (35.8MB)
[K 100% |████████████████████████████████| 35.8MB 37kB/s eta 0:00:01 11% |███▉ | 4.2MB 19.9MB/s eta 0:00:02 20% |██████▌ | 7.3MB 32.0MB/s eta 0:00:01
[?25hRequirement already satisfied: pillow>=2.1.0 in /srv/venv/lib/python3.6/site-packages (from scikit-image)
Requirement already satisfied: scipy>=0.17.0 in /srv/venv/lib/python3.6/site-packages (from scikit-image)
Requirement already satisfied: matplotlib>=1.3.1 in /srv/venv/lib/python3.6/site-packages (from scikit-image)
Collecting networkx>=1.8 (from scikit-image)
Downloading networkx-2.1.zip (1.6MB)
[K 100% |████████████████████████████████| 1.6MB 882kB/s eta 0:00:01
[?25hRequirement already satisfied: six>=1.7.3 in /srv/venv/lib/python3.6/site-packages (from scikit-image)
Collecting PyWavelets>=0.4.0 (from scikit-image)
Downloading PyWavelets-0.5.2-cp36-cp36m-manylinux1_x86_64.whl (5.7MB)
###Markdown
Load dependencies
###Code
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, SpatialDropout1D
from keras.layers import SimpleRNN # new!
from keras.callbacks import ModelCheckpoint
import os
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Set hyperparameters
###Code
# output directory name:
output_dir = 'model_output/rnn'
# training:
epochs = 16 # way more!
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 10000
max_review_length = 100 # lowered due to vanishing gradient over time
pad_type = trunc_type = 'pre'
drop_embed = 0.2
# RNN layer architecture:
n_rnn = 256
drop_rnn = 0.2
# dense layer architecture:
# n_dense = 256
# dropout = 0.2
###Output
_____no_output_____
###Markdown
Load data
###Code
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words) # removed n_words_to_skip
###Output
Downloading data from https://s3.amazonaws.com/text-datasets/imdb.npz
17465344/17464789 [==============================] - 2s 0us/step
###Markdown
Preprocess data
###Code
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(SpatialDropout1D(drop_embed))
model.add(SimpleRNN(n_rnn, dropout=drop_rnn))
# model.add(Dense(n_dense, activation='relu')) # typically don't see top dense layer in NLP like in
# model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 100, 64) 640000
_________________________________________________________________
spatial_dropout1d_1 (Spatial (None, 100, 64) 0
_________________________________________________________________
simple_rnn_1 (SimpleRNN) (None, 256) 82176
_________________________________________________________________
dense_1 (Dense) (None, 1) 257
=================================================================
Total params: 722,433
Trainable params: 722,433
Non-trainable params: 0
_________________________________________________________________
###Markdown
Configure model
###Code
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
###Output
_____no_output_____
###Markdown
Train!
###Code
# 80.6% validation accuracy in epoch 4
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/16
25000/25000 [==============================] - 56s 2ms/step - loss: 0.6999 - acc: 0.5173 - val_loss: 0.6920 - val_acc: 0.5230
Epoch 2/16
25000/25000 [==============================] - 60s 2ms/step - loss: 0.6928 - acc: 0.5290 - val_loss: 0.6821 - val_acc: 0.5558
Epoch 3/16
25000/25000 [==============================] - 55s 2ms/step - loss: 0.6628 - acc: 0.5933 - val_loss: 0.6642 - val_acc: 0.5754
Epoch 4/16
25000/25000 [==============================] - 57s 2ms/step - loss: 0.5377 - acc: 0.7295 - val_loss: 0.5721 - val_acc: 0.7005
Epoch 5/16
25000/25000 [==============================] - 60s 2ms/step - loss: 0.4192 - acc: 0.8158 - val_loss: 0.5206 - val_acc: 0.7461
Epoch 6/16
25000/25000 [==============================] - 55s 2ms/step - loss: 0.3935 - acc: 0.8310 - val_loss: 0.4553 - val_acc: 0.8154
Epoch 7/16
25000/25000 [==============================] - 60s 2ms/step - loss: 0.4051 - acc: 0.8194 - val_loss: 0.4691 - val_acc: 0.7858
Epoch 8/16
25000/25000 [==============================] - 56s 2ms/step - loss: 0.3443 - acc: 0.8612 - val_loss: 0.4900 - val_acc: 0.7880
Epoch 9/16
25000/25000 [==============================] - 56s 2ms/step - loss: 0.3393 - acc: 0.8637 - val_loss: 0.4625 - val_acc: 0.7946
Epoch 10/16
25000/25000 [==============================] - 61s 2ms/step - loss: 0.3133 - acc: 0.8782 - val_loss: 0.4730 - val_acc: 0.8034
Epoch 11/16
25000/25000 [==============================] - 56s 2ms/step - loss: 0.3041 - acc: 0.8784 - val_loss: 0.5378 - val_acc: 0.7970
Epoch 12/16
25000/25000 [==============================] - 56s 2ms/step - loss: 0.5194 - acc: 0.7259 - val_loss: 0.6488 - val_acc: 0.6063
Epoch 13/16
25000/25000 [==============================] - 60s 2ms/step - loss: 0.5333 - acc: 0.7276 - val_loss: 0.6261 - val_acc: 0.6363
Epoch 14/16
25000/25000 [==============================] - 56s 2ms/step - loss: 0.4273 - acc: 0.8064 - val_loss: 0.5242 - val_acc: 0.7890
Epoch 15/16
25000/25000 [==============================] - 58s 2ms/step - loss: 0.3760 - acc: 0.8392 - val_loss: 0.5559 - val_acc: 0.7827
Epoch 16/16
25000/25000 [==============================] - 62s 2ms/step - loss: 0.3873 - acc: 0.8296 - val_loss: 0.5056 - val_acc: 0.7837
###Markdown
Evaluate
###Code
model.load_weights(output_dir+"/weights.03.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
"{:0.2f}".format(roc_auc_score(y_valid, y_hat)*100.0)
# test complete; Gopal
###Output
_____no_output_____ |
Chapter1_Tensors/Linear_Algebra_solution.ipynb | ###Markdown
Linear Algebra
###Code
from __future__ import print_function
import torch
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/pytorch_exercises"
torch.__version__
np.__version__
###Output
_____no_output_____
###Markdown
NOTE on notation _x, _y, _z, ...: NumPy 0-d or 1-d arrays _X, _Y, _Z, ...: NumPy 2-d or higer dimensional arrays x, y, z, ...: 0-d or 1-d tensors X, Y, Z, ...: 2-d or higher dimensional tensors Matrix and vector products Q1. Compute the inner product of two vectors x and y.
###Code
x = torch.Tensor([1, 2])
y = torch.Tensor([3, 4])
z = x.matmul(y)
print(z)
# = 1*3 + 2*4
assert z==x.dot(y)
###Output
11.0
###Markdown
Q2. Compute the product of vector x and matrix Y.
###Code
x = torch.Tensor([1, 2])
Y = torch.Tensor([[0, 0], [1, 1]])
z = x.matmul(Y)
print(z)
assert torch.equal(z, x.unsqueeze(0).mm(Y).squeeze()) is True
###Output
2
2
[torch.FloatTensor of size 2]
###Markdown
Q3. Compute a matrix-vector product of matrix X and vector y.
###Code
X = torch.Tensor([[1, 2], [3, 4]])
y = torch.Tensor([3, 4])
z = X.matmul(y)
print(z)
assert torch.equal(z, X.mv(y)) is True
###Output
11
25
[torch.FloatTensor of size 2]
###Markdown
Q4. Compute a matrix multiplication of matrix X and Y.
###Code
X = torch.Tensor([[1, 2], [3, 4]])
Y = torch.Tensor([[0, 0], [1, 1]])
Z = X.matmul(Y)
print(Z)
assert torch.equal(Z, X.mm(Y)) is True
###Output
2 2
4 4
[torch.FloatTensor of size 2x2]
###Markdown
Q5. Compute a batch matrix multiplication of tensor X and Y.
###Code
X = torch.randn(3, 4, 5)
Y = torch.randn(3, 5, 6)
Z = X.matmul(Y)
print(Z.size())
assert torch.equal(Z, X.bmm(Y)) is True
###Output
torch.Size([3, 4, 6])
###Markdown
Q6. Express the below computation as a single line.`M + x⊗y`
###Code
x = torch.Tensor([1, 2])
y = torch.Tensor([3, 4])
M = torch.ones(2, 2)
Z = M.addr(x, y)
print(Z)
###Output
4 5
7 9
[torch.FloatTensor of size 2x2]
###Markdown
Q7.Express the below computation as a single line.`m + torch.mv(X, y)`
###Code
X = torch.Tensor([[1, 2], [3, 4]])
y = torch.Tensor([3, 4])
m = torch.ones(2)
print(m.addmv(X, y))
###Output
12
26
[torch.FloatTensor of size 2]
###Markdown
Q8.Express the below computation as a single line.`M + torch.mm(X, Y)`
###Code
X = torch.Tensor([[1, 2], [3, 4]])
Y = torch.Tensor([[0, 0], [1, 1]])
M = torch.ones(2, 2)
Z = M.addmm(X, Y)
print(Z)
###Output
3 3
5 5
[torch.FloatTensor of size 2x2]
###Markdown
Q9. Express the below computation as a single line.``M + torch.sum(torch.bmm(X, Y), 0)``
###Code
X = torch.randn(10, 3, 4)
Y = torch.randn(10, 4, 5)
M = torch.ones(3, 5)
Z = M.addbmm(X, Y)
print(Z, Z.size())
###Output
-0.3284 1.0236 -0.3820 -1.3869 0.8295
-7.2099 -2.7595 1.6010 6.4944 1.4375
-3.2925 -8.0896 0.9989 -0.1702 3.2710
[torch.FloatTensor of size 3x5]
torch.Size([3, 5])
###Markdown
Q10. Express the below computation as a single line.`M + torch.bmm(X, Y)`
###Code
X = torch.randn(10, 3, 4)
Y = torch.randn(10, 4, 5)
M = torch.ones(3, 5)
Z = M.baddbmm(X, Y) # M is broadcasted to X * Y
print(Z.size())
###Output
torch.Size([10, 3, 5])
###Markdown
Decompositions Q11. Compute the upper trianglular matrix `U` in the Cholesky decomposition of X.
###Code
_X = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.float32)
X = torch.from_numpy(_X)
U = X.potrf(upper=True)
print(U)
###Output
2 6 -8
0 1 5
0 0 3
[torch.FloatTensor of size 3x3]
###Markdown
Q12. Compute the qr factorization of X.
###Code
_X = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
X = torch.from_numpy(_X)
q, r = X.qr()
print("q=", q)
print("r=", r)
###Output
q=
-0.8571 0.3943 0.3314
-0.4286 -0.9029 -0.0343
0.2857 -0.1714 0.9429
[torch.FloatTensor of size 3x3]
r=
-14.0000 -21.0000 14.0000
0.0000 -175.0000 70.0000
0.0000 0.0000 -35.0000
[torch.FloatTensor of size 3x3]
###Markdown
Q13. Factor x by Singular Value Decomposition.
###Code
_X = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
X = torch.from_numpy(_X)
U, s, V = X.svd()
print("U=", U)
print("s=", s)
print("V=", V)
###Output
U=
0 1 0 0
1 0 0 0
-0 0 0 -1
-0 0 1 0
[torch.FloatTensor of size 4x4]
s=
3.0000
2.2361
2.0000
0.0000
[torch.FloatTensor of size 4]
V=
-0.0000 0.4472 -0.0000 0.0000
-0.0000 0.0000 1.0000 0.0000
1.0000 0.0000 -0.0000 0.0000
0.0000 0.0000 0.0000 1.0000
0.0000 0.8944 0.0000 -0.0000
[torch.FloatTensor of size 5x4]
###Markdown
Matrix eigenvalues Q14. Compute the eigenvalues and right eigenvectors of X.
###Code
X = torch.Tensor([[2, 0, 0], [0, 3, 4], [0, 4, 9]])
e, v = X.eig(eigenvectors=True)
print("eigen values=", e)
print("eigen vectors=", v)
_e, _v = np.linalg.eig(X.numpy())
assert np.allclose(e.numpy()[:, 0], _e)
###Output
eigen values=
11 0
1 0
2 0
[torch.FloatTensor of size 3x2]
eigen vectors=
0.0000 0.0000 1.0000
0.4472 0.8944 0.0000
0.8944 -0.4472 0.0000
[torch.FloatTensor of size 3x3]
###Markdown
Norms Q15. Calculate the L2 norm of x.
###Code
x = torch.arange(-5, 5)
y = x.norm(p=2)
print(y)
assert y==np.sqrt((x**2).sum())
###Output
9.21954445729
###Markdown
Q16. Calculate the L1 norm of x.
###Code
x = torch.arange(-5, 5)
y = x.norm(p=1)
print(y)
assert y==x.abs().sum()
###Output
25.0
###Markdown
Inverting matrices Q17. Compute the inverse of X.
###Code
X = torch.Tensor([[1, 2], [3, 4]])
Y = X.inverse()
print(Y)
assert np.allclose(Y.numpy(), np.linalg.inv(X.numpy()))
###Output
-2.0000 1.0000
1.5000 -0.5000
[torch.FloatTensor of size 2x2]
|
legacy/clase19/jit.ipynb | ###Markdown
Diseño de software para cómputo científico---- Unidad 5: Integración con lenguajes de alto nivel con bajo nivel. Agenda de la Unidad 5- **JIT (Numba).**- Cython.- Integración de Python con FORTRAN.- Integración de Python con C. Mini repaso de decoradoresEstamos de acuerdo que esto:```pythondef func(): passfunc = dec(func)```y esto```python@decdef func(): pass```Son lo mismo? Mini repaso de decoradoresY también estamos de acuerdo que esto:```python@dec(param=1)def func(): pass```y esto```pythondef func(): passfunc = dec(param=1)(func)```Son lo mismo? Herramientas comunes
###Code
# for profiling
import timeit
import math
# for plotting
%matplotlib inline
import matplotlib.pyplot as plt
# numpy
import numpy as np
###Output
_____no_output_____
###Markdown
JIT (just-in-time) Compilers- It is a technique for improving the performance of programming systems that compile to bytecode, consisting of translating the bytecode into native machine code at run time. - In theory, JITs can deliver better performance than traditional (AOT) compilers.- JIT provides portability across architectures.- Normally a JIT is used together with an AOT compiler that generates some kind of intermediate code. A bit of fractals
###Code
def mandel(x, y, max_iters):
"""
Given the real and imaginary parts of a complex number,
determine if it is a candidate for membership in the Mandelbrot
set given a fixed number of iterations.
"""
i = 0
c = complex(x,y)
z = 0.0j
for i in range(max_iters):
z = z * z + c
if (z.real * z.real + z.imag * z.imag) >= 4:
return i
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = mandel(real, imag, iters)
image[y, x] = color
return image
###Output
_____no_output_____
###Markdown
Let's create the fractal
###Code
# create the image
image = np.zeros((500 * 2, 750 * 2), dtype=np.uint8)
# run the calculations
normal = %timeit -o create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
# show everything
plt.imshow(image, cmap="viridis");
###Output
3.88 s ± 154 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Enter Numba
###Code
import numba
mandel = numba.jit(mandel)
create_fractal = numba.jit(create_fractal)
image = np.zeros((500 * 2, 750 * 2), dtype=np.uint8)
jited = %timeit -o create_fractal(-2.0, 1.0, -1.0, 1.0, image, 20)
plt.imshow(image, cmap="viridis");
###Output
46.6 ms ± 942 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Improvement
###Code
fig, axes = plt.subplots(ncols=2)
fig.set_size_inches(12, 3)
axes[0].plot(normal.all_runs)
axes[1].plot(jited.all_runs)
axes[0].set_title("Normal functions")
axes[1].set_title("Jited functions")
axes[0].set_ylabel("Seconds")
axes[1].set_ylabel("Seconds")
axes[0].set_xlabel("Run")
axes[1].set_xlabel("Run");
normal.best / jited.best
###Output
_____no_output_____
###Markdown
What is Numba?- Numba translates functions from a **subset** of Python into optimized machine code at run time using **LLVM**. - Numerical algorithms compiled with Numba in Python can approach the speeds of C or FORTRAN.- It also offers AOT compilation and eager compilation options.- It is designed to be used with NumPy arrays and functions. - Special decorators can create universal functions that broadcast over NumPy arrays just like NumPy functions do.![image.png](attachment:image.png) LLVM Compiler Infrastructure Project- Provides the middle layers of a complete compiler, taking code in an "intermediate form" (IF) from a compiler and emitting an optimized IF. - This new IF can then be converted into machine code for a concrete machine.- It can generate relocatable machine code at compile time, at link time, or at run time.![image.png](attachment:image.png) How Numba works![image.png](attachment:image.png) Alternatives- Julia is "more or less" this. In many ways it is "less" JIT and "more" AOT.- PyPy is a general JIT that replaces CPython's.![image.png](attachment:image.png) Type inference
###Code
@numba.jit
def add(a, b):
return a + b
%%time
add(1, 1)
%%time
add(1., 1)
%%time
add("hola ", "mundo")
add.nopython_signatures
###Output
_____no_output_____
###Markdown
Numba object-mode
###Code
class ConEspacio:
def __init__(self, v):
self.v = v
def __add__(self, other):
return f"{self.v} {other.v}"
add(ConEspacio("hola"), ConEspacio("mundo"))
###Output
<ipython-input-20-8f825e2463a3>:1: NumbaWarning: [1m
Compilation is falling back to object mode WITH looplifting enabled because Function "add" failed type inference due to: [1m[1mnon-precise type pyobject[0m
[0m[1mDuring: typing of argument at <ipython-input-20-8f825e2463a3> (3)[0m
[1m
File "<ipython-input-20-8f825e2463a3>", line 3:[0m
[1mdef add(a, b):
[1m return a + b
[0m [1m^[0m[0m
[0m
@numba.jit
/home/juan/proyectos/dis_ssw/lib/python3.8/site-packages/numba/core/object_mode_passes.py:177: NumbaWarning: [1mFunction "add" was compiled in object mode without forceobj=True.
[1m
File "<ipython-input-20-8f825e2463a3>", line 2:[0m
[[email protected]
[1mdef add(a, b):
[0m[1m^[0m[0m
[0m
warnings.warn(errors.NumbaWarning(warn_msg,
/home/juan/proyectos/dis_ssw/lib/python3.8/site-packages/numba/core/object_mode_passes.py:187: NumbaDeprecationWarning: [1m
Fall-back from the nopython compilation path to the object mode compilation path has been detected, this is deprecated behaviour.
For more information visit https://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit
[1m
File "<ipython-input-20-8f825e2463a3>", line 2:[0m
[[email protected]
[1mdef add(a, b):
[0m[1m^[0m[0m
[0m
warnings.warn(errors.NumbaDeprecationWarning(msg,
###Markdown
Type inference
###Code
@numba.jit(nopython=True) # numba.njit
def add(a, b):
return a + b
add(1, 1)
add(ConEspacio("hola"), ConEspacio("mundo"))
###Output
_____no_output_____
###Markdown
But I want to use object mode without warnings - Force object mode
###Code
@numba.jit(forceobj=True)
def add(a, b):
return a + b
add(1, 2)
add(ConEspacio("hola"), ConEspacio("Mundo"))
###Output
_____no_output_____
###Markdown
But I want to use object mode without warnings - Facade object
###Code
@numba.njit
def numba_add(a, b):
return a + b
def add(a, b):
try:
return numba_add(a, b)
except numba.TypingError:
return a + b
add(ConEspacio("hola"), ConEspacio("Mundo"))
###Output
_____no_output_____
###Markdown
But I want to use object mode without warnings -- Numba issue 4191- https://github.com/numba/numba/issues/4191- Comment: https://github.com/numba/numba/issues/3907#issuecomment-500765025```python @numba.jit(fallback=sum) def my_sum(a, b): return a + b``` And how do we fare versus NumPy?
###Code
def sincos(a, b):
return math.sin(a) * math.cos(b)
@numba.njit
def nb_sincos(a, b):
return math.sin(a) * math.cos(b)
def np_sincos(a, b):
return np.sin(a) * np.cos(b)
normal_run = %timeit -o sincos(1.5, 2.45)
nb_run = %timeit -o nb_sincos(1.5, 2.45)
np_run = %timeit -o np_sincos(1.5, 2.45)
###Output
222 ns ± 9.41 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
208 ns ± 6.59 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
1.61 µs ± 17.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Universal functions- A universal function (or ufunc for short) is a function that operates on `ndarrays` element by element, supporting array broadcasting and type casting.
###Code
np.multiply(2, 2)
np.multiply([1, 2, 3, 4], 2)
np.multiply([1, 2., 3, 4], 2)
mtx = [
[1, 2, 3, 4],
[5, 6, 7, 8]
]
np.multiply(mtx, [0, 0, 1, 0])
###Output
_____no_output_____
###Markdown
Universal functions- Las funciones universales se definen en C- Este es un ejemplo del tutorial de la funcion *logit* **SOLO** para `double`.```Cstatic void double_logit(char **args, npy_intp *dimensions, npy_intp* steps, void* data){ npy_intp i; npy_intp n = dimensions[0]; char *in = args[0], *out = args[1]; npy_intp in_step = steps[0], out_step = steps[1]; double tmp; for (i = 0; i < n; i++) { tmp = *(double *)in; tmp /= 1-tmp; in += in_step; out += out_step; }}``` Osea... NO Funciones vectorizadas- Funcionan igual pero el rendimiento es puro Python$logit(p) = \log{\frac{p}{1 - p}}$
###Code
@np.vectorize
def logit(p):
if 0 < p > 1:
raise ValueError()
elif p == 0:
return -np.inf
elif p == 1:
return np.inf
return math.log(p / (1. - p))
logit(.5)
logit([0, 0.25, 0.5, 0.75, 1])
logit([[1, 0], [.5, .75]])
arr = np.random.random(size=(1000, 1000))
n_logit = %timeit -o logit(arr)
###Output
388 ms ± 2.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Vectorized functions -- With Numba
###Code
@numba.vectorize
def logit(p):
if 0 < p > 1:
raise ValueError()
elif p == 0:
return -np.inf
elif p == 1:
return np.inf
return math.log(p / (1. - p))
nb_logit = %timeit -o logit(arr)
n_logit.best / nb_logit.best # (last year this gave a 15x speedup)
###Output
_____no_output_____
###Markdown
Vectorized functions -- With Numba Parallel + Eager compilation
###Code
@numba.vectorize([numba.float64(numba.float64)], target="parallel")
def logit(p):
if 0 < p > 1:
raise ValueError()
elif p == 0:
return -np.inf
elif p == 1:
return np.inf
return math.log(p / (1. - p))
nb_logit = %timeit -o logit(arr)
n_logit.best / nb_logit.best # last year this was an 81x speedup
###Output
_____no_output_____
###Markdown
AOT- This is for when I want to distribute modules compiled with Numba, or have them compiled at execution time.- It does not work for universal functions (yet).- Imagine we have a module called `mult_src.py` with the following code.
###Code
import numba
from numba.pycc import CC
cc = CC('mult')
@cc.export('mult', 'f8(f8, f8)')
@cc.export('mult', 'i4(i4, i4)')
def mult(a, b):
return a * b
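# Hedged usage sketch (not part of the original notebook): compile the module ahead of time
# with numba.pycc and use the generated extension; assumes a C compiler is available.
cc.compile()   # writes the compiled extension (e.g. mult.cpython-*.so) to the working directory
# import mult            # the generated module is named after CC('mult')
# mult.mult(2.0, 3.0)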
###Output
_____no_output_____ |
wrangle_act_final.ipynb | ###Markdown
Wrangle and Analyze data Wrangling The entire dataset consists of 3 subsets:- The WeRateDogs Twitter archive summerized in the ```twitter-archive-enhanced.csv``` file- The tweet image neural network predictions of the breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network a.k.a. ```image-predictions.tsv``` - to be downloaded from https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv using Requests- Data pulled out from the Twitter API: retweet count and favorite count at minimum). Load necessary libraries Gather data
###Code
# for data structure, calculations and manipulations of the dataset
import pandas as pd
import numpy as np
# for visualisations
import matplotlib.pyplot as plt
# for importing data from a URL source
import requests as req
from io import StringIO as IO
# for importing Twitter data
import tweepy
# for converting json data to a python dictionary
import json
# for managing running time
import time
###Output
_____no_output_____
###Markdown
Prepare the first part - load the WeRateDogs Twitter archive
###Code
main_set = pd.read_csv("twitter-archive-enhanced.csv")
main_set.head(3)
###Output
_____no_output_____
###Markdown
Prepare the second part - neural network predictions of the dog breed
###Code
# Get the file from URL using Requests
sec_file = req.get('https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv')
# Prepare the string for input to the read_csv module
f = IO(sec_file.text)
# Prepare the dataframe
sec_data = pd.read_csv(f, sep='\t')
sec_data.head(3)
###Output
_____no_output_____
###Markdown
Prepare the third part - Twitter data
###Code
consumer_key = '...'
consumer_secret = '...'
access_token = '...'
access_secret = '...'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
# Create tweet_json file
tweet_json = open("tweet_json.txt", "w")
# query Twitter to gather json data of the tweets with id numbers from the main_set
start = time.time()
for i in range(main_set.tweet_id.shape[0]):
try:
tweet = api.get_status(main_set.tweet_id[i], tweet_mode='extended')
json.dump(tweet._json, tweet_json)
tweet_json.write('\n')
except tweepy.TweepError:
next
end = time.time()
print(i)
print(end - start)
# Seal the tweet_json file
tweet_json.close()
# Prepare for sifting out the interesting stuff from the tweet_json.txt file
id_list = []
place_list = []
rt_count_list = []
fav_count_list = []
retweeted_list = []
# Import interesting stuff from tweet_json.txt
with open("tweet_json.txt") as json_tweets:
for k in json_tweets:
line = json.loads(k)
id_list.append(line['id'])
place_list.append(line['place'])
rt_count_list.append(line['retweet_count'])
fav_count_list.append(line['favorite_count'])
retweeted_list.append(line['retweeted'])
# Convert the lists to a pandas dataframe
twitter_data = pd.DataFrame(
{
'tweet_id': id_list,
'place': place_list,
'retweet_count': rt_count_list,
'favorite_count': fav_count_list,
'retweeted': retweeted_list
}
)
###Output
_____no_output_____
###Markdown
After collecting the initial data it's time to merge it into one dataset. Because all three sets share a common column - **tweet_id** - it will be used as the merging key. The INNER JOIN will ensure that only tweets present in all 3 sets are analyzed, i.e. the amount of missing data will be reduced.
###Code
# Join the datasets
main_sec = main_set.merge(sec_data, how='inner')
data = main_sec.merge(twitter_data, how='inner')
###Output
_____no_output_____
###Markdown
Assess After preparing the initial version of the dataset I embark onto the second step, that is **assessing** the data prior to the cleaning process.
###Code
# Get to know the number of nulls, datatype and names for all columns
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2066 entries, 0 to 2065
Data columns (total 32 columns):
tweet_id 2066 non-null int64
in_reply_to_status_id 23 non-null float64
in_reply_to_user_id 23 non-null float64
timestamp 2066 non-null object
source 2066 non-null object
text 2066 non-null object
retweeted_status_id 75 non-null float64
retweeted_status_user_id 75 non-null float64
retweeted_status_timestamp 75 non-null object
expanded_urls 2066 non-null object
rating_numerator 2066 non-null int64
rating_denominator 2066 non-null int64
name 2066 non-null object
doggo 2066 non-null object
floofer 2066 non-null object
pupper 2066 non-null object
puppo 2066 non-null object
jpg_url 2066 non-null object
img_num 2066 non-null int64
p1 2066 non-null object
p1_conf 2066 non-null float64
p1_dog 2066 non-null bool
p2 2066 non-null object
p2_conf 2066 non-null float64
p2_dog 2066 non-null bool
p3 2066 non-null object
p3_conf 2066 non-null float64
p3_dog 2066 non-null bool
place 1 non-null object
retweet_count 2066 non-null int64
favorite_count 2066 non-null int64
retweeted 2066 non-null bool
dtypes: bool(4), float64(7), int64(6), object(15)
memory usage: 476.1+ KB
###Markdown
Looking at the above summary I notice the following issues:Quality:- missing data in: in_reply_to_status_id- missing data in: in_reply_to_user_id- missing data in: retweeted_status_id- missing data in: retweeted_status_user_id- missing data in: retweeted_status_timestamp- missing data in: place- some of the tweets are retweets and need to be removed- the ```tweet_id``` is an integer and should be a string (it's not meant to be treated as a number and be added, multiplied, etc.)- there are columns from image predictions that need further processing, starting with p1, p2, p3 - these should be processed to provide the p-value, the object name and information on whether the object is a dog for the best image prediction- the data type of the rating numerator presented in the text is not consistent with the extracted type (some of the values are floats, whereas the extracted ones are all integers)- the dog stage - there is not a single value for each tweet; some rows present two dog stagesTidy:- the data is spread across 3 separate dataframes- the dog stages: doggo, floofer, pupper, puppo are spread across 4 columns and could be formed into 1
###Code
data.img_num.unique()
###Output
_____no_output_____
###Markdown
Dog stagesThe dog stages are split into 4 columns and have to be merged into 1, but first the structure of these columns has to be inspected
###Code
print(data.doggo.unique())
print(data.floofer.unique())
print(data.pupper.unique())
print(data.puppo.unique())
###Output
['None' 'doggo']
['None' 'floofer']
['None' 'pupper']
['None' 'puppo']
###Markdown
Nice, it looks like apart from the stage names there is only the ```None``` category. RetweetsSecondly, I'd like to remove all retweets, i.e. tweets for which the retweeted parameter is ```True```. Boolean values are treated by the sum function as integers, with 0 for ```False``` and 1 for ```True```. Thus, I can assess the number of retweets by summing the values in the ```retweeted``` column.
###Code
data.retweeted.sum()
###Output
_____no_output_____
###Markdown
It looks like only ```False``` values are present in this column. Rating numerator/denominator problemsIt was noted that some problems with the rating numerator/denominator are present in the data. Some values may not have been extracted properly. This can be assessed by looking at the numerators and denominators.
###Code
data.rating_numerator.unique()
data.rating_denominator.unique()
###Output
_____no_output_____
###Markdown
There are a lot of strange-looking values, e.g. 0 (that would be pretty ostracizing...), 165, 1776, etc. These will have to be corrected in the Cleaning phase. NamesLet's check out the unique values of the name column.
###Code
data.name.unique()
###Output
_____no_output_____
###Markdown
At the beginning there are values which do not look like real names, e.g. None, a, ... Let's see if sorting these values helps.
###Code
sorted(data.name.unique())
###Output
_____no_output_____
###Markdown
Apart from None, there's a huge number of lowercase words, which most probably are not real names... Object prediction namesThe object names predicted by the artificial neural network are not consistent - they start with either a lowercase or a capital letter and in some cases the words are connected with underscores. The underscores should be replaced by single spaces and all names should start with a capital letter.
###Code
print(data.p1.unique())
print(data.p2.unique())
print(data.p3.unique())
###Output
['orange' 'Chihuahua' 'paper_towel' 'basset' 'Chesapeake_Bay_retriever'
'Appenzeller' 'Pomeranian' 'Irish_terrier' 'Pembroke' 'Samoyed'
'French_bulldog' 'golden_retriever' 'whippet' 'Siberian_husky' 'limousine'
'Mexican_hairless' 'kuvasz' 'convertible' 'pug' 'Blenheim_spaniel'
'Labrador_retriever' 'malamute' 'Italian_greyhound' 'chow'
'German_shepherd' 'Doberman' 'Eskimo_dog' 'web_site' 'Weimaraner' 'Saluki'
'tusker' 'street_sign' 'miniature_pinscher' 'German_short-haired_pointer'
'English_springer' 'tricycle' 'tabby' 'vizsla' 'Irish_setter' 'bloodhound'
'Bernese_mountain_dog' 'West_Highland_white_terrier' 'cocker_spaniel'
'flat-coated_retriever' 'Cardigan' 'Newfoundland' 'basketball'
'Shetland_sheepdog' 'komondor' 'kelpie' 'home_theater'
'Greater_Swiss_Mountain_dog' 'comic_book' 'laptop' 'Border_collie' 'pole'
'bull_mastiff' 'marmot' 'Staffordshire_bullterrier' 'Lakeland_terrier'
'Australian_terrier' 'syringe' 'envelope' 'Bedlington_terrier' 'lakeside'
'chimpanzee' 'Angora' 'revolver' 'Boston_bull' 'Old_English_sheepdog'
'black-and-tan_coonhound' 'grille' 'Great_Dane' 'barbell' 'prison'
'barrow' 'pencil_box' 'car_mirror' 'Yorkshire_terrier' 'tennis_ball'
'schipperke' 'patio' 'fountain' 'coffee_mug' 'sea_lion' 'Airedale'
'white_wolf' 'giant_schnauzer' 'Dandie_Dinmont' 'Irish_water_spaniel'
'washer' 'tub' 'ice_bear' 'military_uniform' 'Afghan_hound'
'Brittany_spaniel' 'teapot' 'swing' 'Leonberg' 'Border_terrier'
'Great_Pyrenees' 'birdhouse' 'Norwegian_elkhound'
'American_Staffordshire_terrier' 'shopping_cart' 'mortarboard' 'leopard'
'wooden_spoon' 'borzoi' 'Rottweiler' 'hyena' 'toilet_tissue'
'Rhodesian_ridgeback' 'ox' 'bath_towel' 'boxer' 'jersey' 'Pekinese'
'briard' 'toy_poodle' 'seat_belt' 'hippopotamus' 'teddy' 'collie'
'gas_pump' 'English_setter' 'cairn' 'dingo' 'cowboy_boot' 'bathtub'
'malinois' 'Saint_Bernard' 'Gordon_setter' 'school_bus' 'acorn_squash'
'standard_poodle' 'miniature_schnauzer' 'curly-coated_retriever'
'Tibetan_mastiff' 'dishwasher' 'restaurant' 'dalmatian' 'miniature_poodle'
'doormat' 'Siamese_cat' 'beagle' 'toy_terrier' 'loupe' 'Norwich_terrier'
'Arabian_camel' 'shield' 'bookcase' 'minivan' 'mousetrap' 'vacuum'
'Norfolk_terrier' 'dough' 'papillon' 'pedestal' 'wire-haired_fox_terrier'
'bakery' "jack-o'-lantern" 'refrigerator' 'remote_control' 'beach_wagon'
'porcupine' 'quilt' 'pool_table' 'padlock' 'sundial' 'clumber'
'long-horned_beetle' 'Walker_hound' 'giant_panda' 'silky_terrier'
'meerkat' 'Madagascar_cat' 'maillot' 'basenji' 'lion' 'coral_reef'
'keeshond' 'redbone' 'Welsh_springer_spaniel' 'African_grey' 'paddle'
'gondola' 'brown_bear' 'cougar' 'axolotl' 'radio_telescope' 'balloon'
'Sussex_spaniel' 'bookshop' 'skunk' 'cash_machine' 'bluetick' 'geyser'
'conch' 'rotisserie' 'goose' 'upright' 'American_black_bear' 'Shih-Tzu'
'muzzle' 'bow_tie' 'sulphur-crested_cockatoo' 'tiger_shark'
'Tibetan_terrier' 'binoculars' 'coil' 'traffic_light' 'rapeseed'
'sunglasses' 'lawn_mower' 'wombat' 'guinea_pig' 'cliff' 'cup'
'studio_couch' 'Maltese_dog' 'Lhasa' 'koala' 'polecat' 'hand_blower'
'stone_wall' 'window_shade' 'handkerchief' 'bonnet'
'soft-coated_wheaten_terrier' 'alp' 'badger' 'hammer' 'nail' 'timber_wolf'
'triceratops' 'espresso' 'llama' 'leatherback_turtle' 'cowboy_hat' 'bib'
'hotdog' 'agama' 'ram' 'wild_boar' 'space_heater' 'four-poster'
'Egyptian_cat' 'otter' 'wool' 'wallaby' 'tailed_frog' 'hamster'
'hummingbird' 'washbasin' 'grey_fox' 'weasel' 'beaver' 'groenendael'
'Scottish_deerhound' 'dining_table' 'ice_lolly' 'bison' 'snorkel'
'bighorn' 'motor_scooter' 'bustard' 'dogsled' 'snowmobile'
'Scotch_terrier' 'crane' 'Ibizan_hound' 'hog' 'bannister' 'sorrel'
'wood_rabbit' 'fiddler_crab' 'cheeseburger' 'damselfly' 'sliding_door'
'clog' 'minibus' 'cheetah' 'toyshop' 'bald_eagle' 'jigsaw_puzzle'
'frilled_lizard' 'carousel' 'bubble' 'Christmas_stocking' 'pillow'
'earthstar' 'EntleBucher' 'mailbox' 'panpipe' 'harp' 'maze'
'water_buffalo' 'Japanese_spaniel' 'common_iguana' 'guenon'
'black-footed_ferret' 'rain_barrel' 'book_jacket' 'Loafer' 'feather_boa'
'walking_stick' 'standard_schnauzer' 'box_turtle' 'stove' 'bow' 'ocarina'
'shower_curtain' 'seashore' 'tick' 'African_crocodile' 'soccer_ball'
'jellyfish' 'Brabancon_griffon' 'slug' 'ostrich' 'robin' 'toilet_seat'
'suit' 'water_bottle' 'Arctic_fox' 'ski_mask' 'leaf_beetle' 'sandbar'
'starfish' 'microwave' 'terrapin' 'killer_whale' 'carton' 'bee_eater'
'china_cabinet' 'prayer_rug' 'park_bench' 'hen' 'hermit_crab' 'zebra'
'picket_fence' 'African_hunting_dog' 'pitcher' 'microphone' 'flamingo'
'scorpion' 'lacewing' 'dhole' 'banana' 'peacock' 'sea_urchin'
'ping-pong_ball' 'platypus' 'mud_turtle' 'boathouse' 'snail' 'pot'
'piggy_bank' 'candle' 'cuirass' 'lynx' 'crash_helmet' 'bearskin'
'shopping_basket' 'king_penguin' 'canoe' 'trombone' 'lorikeet'
'fire_engine' 'ibex' 'electric_fan' 'hare' 'hay' 'swab' 'coho'
'three-toed_sloth' 'desktop_computer']
['bagel' 'Pekinese' 'malamute' 'Labrador_retriever' 'English_springer'
'Irish_terrier' 'Border_collie' 'Eskimo_dog' 'Irish_setter' 'Cardigan'
'Pomeranian' 'boxer' 'borzoi' 'Tibetan_mastiff' 'pug' 'redbone'
'tow_truck' 'Rhodesian_ridgeback' 'sea_lion' 'toy_terrier'
'Great_Pyrenees' 'sports_car' 'Chihuahua' 'shower_cap' 'Shih-Tzu'
'seat_belt' 'Siberian_husky' 'American_Staffordshire_terrier'
'Norwich_terrier' 'French_bulldog' 'malinois' 'miniature_pinscher'
'Chesapeake_Bay_retriever' 'dhole' 'Afghan_hound' 'Angora'
'Indian_elephant' 'umbrella' 'meerkat' 'beagle' 'vizsla' 'Boston_bull'
'window_screen' 'whippet' 'collie' 'golden_retriever'
'Welsh_springer_spaniel' 'Italian_greyhound' 'Saint_Bernard' 'Pembroke'
'Staffordshire_bullterrier' 'miniature_poodle' 'black-and-tan_coonhound'
'bloodhound' 'Sussex_spaniel' 'briard' 'macaque' 'sandbar' 'Appenzeller'
'envelope' 'kuvasz' 'papillon' 'groenendael' 'lakeside' 'Airedale'
'studio_couch' 'oxygen_mask' 'oscilloscope' 'toy_poodle' 'dock' 'gorilla'
'projectile' 'Siamese_cat' 'Tibetan_terrier' 'Doberman' 'beach_wagon'
'otterhound' 'Brabancon_griffon' 'dumbbell' 'dishwasher'
'Bernese_mountain_dog' 'purse' 'menu' 'bull_mastiff' 'Maltese_dog'
'kelpie' 'crossword_puzzle' 'prison' 'American_black_bear' 'Newfoundland'
'Ibizan_hound' 'cup' 'mink' 'Greater_Swiss_Mountain_dog' 'Leonberg'
'flat-coated_retriever' 'chow' 'racket' 'bucket' 'soccer_ball' 'Samoyed'
'bow_tie' 'Saluki' 'Blenheim_spaniel' 'rule' 'cocker_spaniel' 'monitor'
'doormat' 'can_opener' 'keeshond' 'shopping_basket' 'academic_gown'
'jaguar' 'sliding_door' 'ice_bear' 'white_wolf' 'muzzle' 'Border_terrier'
'soft-coated_wheaten_terrier' 'Norwegian_elkhound' 'bison' 'sleeping_bag'
'sweatshirt' 'German_short-haired_pointer' 'marmot' 'Great_Dane'
'Brittany_spaniel' 'dalmatian' 'Mexican_hairless' 'harvester'
'jigsaw_puzzle' 'curly-coated_retriever' 'snorkel' 'German_shepherd' 'tub'
'Norfolk_terrier' 'cab' 'shower_curtain' 'toilet_seat' 'basenji' 'palace'
'toyshop' 'web_site' 'pillow' 'Dandie_Dinmont' 'house_finch'
'Shetland_sheepdog' 'barrel' 'binoculars' 'entertainment_center'
'television' 'English_setter' 'black_widow' 'punching_bag' 'cairn'
'carton' 'bakery' 'hamster' 'bath_towel' 'fountain' 'Bedlington_terrier'
'Walker_hound' 'Lakeland_terrier' 'bluetick' 'saltshaker' 'lighter'
'basketball' 'silky_terrier' 'swab' 'jean' 'bathtub' 'EntleBucher'
'cougar' 'minivan' 'Persian_cat' 'schipperke' 'sea_urchin' 'cradle'
'dining_table' 'necklace' 'mongoose' 'cash_machine' 'basset' 'ox'
'hand-held_computer' 'fur_coat' 'Yorkshire_terrier' 'iPod' 'skunk'
'maillot' 'ram' 'cliff' 'boathouse' 'standard_poodle' 'Madagascar_cat'
'canoe' 'killer_whale' 'Lhasa' 'dam' 'confectionery' 'bathing_cap'
'Old_English_sheepdog' 'laptop' 'wallet' 'volcano' 'printer' 'crutch'
'teddy' 'lesser_panda' 'llama' 'West_Highland_white_terrier' 'sunglasses'
'wire-haired_fox_terrier' 'great_white_shark' 'plow' 'shovel' 'Weimaraner'
'dingo' 'barbershop' 'dugong' 'timber_wolf' 'guinea_pig' 'hog' 'sunglass'
'comic_book' 'swing' 'Windsor_tie' 'wig' 'table_lamp' 'English_foxhound'
'gibbon' 'solar_dish' 'goose' 'four-poster' 'feather_boa'
'computer_keyboard' 'beaver' 'breakwater' 'chain_mail' 'china_cabinet'
'patio' 'Arabian_camel' 'sock' 'badger' 'washbasin' 'lawn_mower'
'window_shade' 'hatchet' 'minibus' 'screw' 'weasel' 'coffee_mug'
'loggerhead' 'sombrero' 'handkerchief' 'Rottweiler' 'giant_panda'
'Gila_monster' 'quilt' 'turnstile' 'wombat' 'Scotch_terrier' 'tree_frog'
'peacock' 'banded_gecko' 'shoji' 'paper_towel' 'Christmas_stocking'
'spatula' 'Arctic_fox' 'grey_fox' 'neck_brace' 'seashore' 'radiator'
'hyena' 'koala' 'moped' 'mashed_potato' 'assault_rifle' 'affenpinscher'
'red_fox' 'pier' 'horse_cart' 'quail' 'medicine_chest' 'hotdog'
'Irish_wolfhound' 'common_newt' 'tennis_ball' 'stove' 'triceratops'
'spindle' 'black-footed_ferret' 'orange' 'torch' 'paddle' 'chimpanzee'
'tailed_frog' 'rifle' 'hen' 'cloak' 'mask' 'apron' 'hay' 'cannon'
'hair_slide' 'bannister' 'Australian_terrier' 'streetcar' 'snowmobile'
'crib' 'junco' 'trench_coat' 'folding_chair' 'Japanese_spaniel' 'polecat'
'water_buffalo' 'leafhopper' 'sandal' 'hare' 'miniature_schnauzer'
'terrapin' 'mailbox' 'rotisserie' 'quill' 'space_heater' 'sarong'
'promontory' 'nail' 'American_alligator' 'goldfish' 'mosquito_net'
'ice_lolly' 'Kerry_blue_terrier' 'bearskin' 'rhinoceros_beetle'
'prayer_rug' 'ashcan' 'Sealyham_terrier' 'corn' 'warthog' 'brown_bear'
'bighorn' 'bobsled' 'stingray' 'lifeboat' 'cockroach' 'grey_whale' 'crate'
'shopping_cart' 'toucan' 'platypus' 'water_bottle' 'cock' 'tick' 'snail'
'tiger' 'rain_barrel' 'wood_rabbit' 'accordion' 'coral_fungus' 'tarantula'
'sulphur_butterfly' 'tabby' 'home_theater' 'siamang' 'indri'
'European_gallinule' 'giant_schnauzer' 'cardigan' 'porcupine'
'spotted_salamander' 'birdhouse' 'slug' 'tray' 'hair_spray' 'lampshade'
'pickup' 'breastplate' 'police_van' 'bib' 'waffle_iron' 'coral_reef'
'knee_pad' 'frilled_lizard' 'toaster' 'pelican' 'bow' 'hamper' 'cornet'
'cowboy_boot' 'wallaby' 'hummingbird' 'spotlight' 'drake'
'African_hunting_dog' 'chain_saw' 'armadillo' 'standard_schnauzer'
'barracouta' 'otter' 'desk' 'komondor' 'mud_turtle']
['banana' 'papillon' 'kelpie' 'spatula' 'German_short-haired_pointer'
'Indian_elephant' 'ice_lolly' 'Pembroke' 'Chesapeake_Bay_retriever'
'Chihuahua' 'chow' 'muzzle' 'basenji' 'Staffordshire_bullterrier'
'redbone' 'Saluki' 'Labrador_retriever' 'English_setter' 'malamute'
'bull_mastiff' 'Weimaraner' 'shopping_cart' 'beagle' 'can_opener'
'Dandie_Dinmont' 'car_wheel' 'Boston_bull' 'Siamese_cat'
'Bernese_mountain_dog' 'kuvasz' 'pug' 'Eskimo_dog' 'Norfolk_terrier'
'Brabancon_griffon' 'Norwegian_elkhound' 'Newfoundland' 'dingo'
'flat-coated_retriever' 'Cardigan' 'cocker_spaniel' 'golden_retriever'
'koala' 'Persian_cat' 'ibex' 'traffic_light' 'clumber' 'bath_towel'
'Egyptian_cat' 'American_Staffordshire_terrier' 'Ibizan_hound'
'Appenzeller' 'toy_terrier' 'Irish_terrier' 'bathtub' 'Tibetan_mastiff'
'Greater_Swiss_Mountain_dog' 'vizsla' 'Great_Pyrenees' 'toy_poodle'
'standard_poodle' 'dalmatian' 'Pomeranian' 'bloodhound' 'swab'
'television' 'EntleBucher' 'book_jacket' 'printer' 'Saint_Bernard'
'collie' 'Pekinese' 'curly-coated_retriever' 'keeshond' 'wreck' 'weasel'
'Siberian_husky' 'teddy' 'boxer' 'German_shepherd' 'barber_chair'
'French_bulldog' 'Bouvier_des_Flandres' 'paper_towel' 'Lakeland_terrier'
'canoe' 'orangutan' 'hog' 'fountain' 'doormat' 'groenendael'
'Japanese_spaniel' 'bluetick' 'Border_collie' 'Rottweiler' 'convertible'
'English_foxhound' 'Sussex_spaniel' 'go-kart' 'file' 'pillow'
'crossword_puzzle' 'Shih-Tzu' 'malinois' 'bonnet' 'restaurant' 'sundial'
'wallaby' 'toilet_tissue' 'otter' 'white_wolf'
'West_Highland_white_terrier' 'Shetland_sheepdog' 'schipperke' 'envelope'
'coffeepot' 'screen' 'Yorkshire_terrier' 'carton' 'seat_belt'
'snow_leopard' 'grand_piano' 'Gordon_setter' 'meerkat' 'water_buffalo'
'poncho' 'Italian_greyhound' 'grey_fox' 'Leonberg' 'guinea_pig'
'tennis_ball' 'gibbon' 'conch' 'polecat' 'boathouse' 'cougar'
'soft-coated_wheaten_terrier' 'mousetrap' 'black-and-tan_coonhound'
'swing' 'Airedale' 'Irish_setter' 'Arctic_fox' 'Norwich_terrier'
'Maltese_dog' 'English_springer' 'miniature_pinscher' 'Samoyed' 'crane'
'tub' 'Doberman' 'Tibetan_terrier' 'umbrella' 'grocery_store' 'bow'
'standard_schnauzer' 'Irish_water_spaniel' 'otterhound' 'nail' 'bubble'
'Great_Dane' 'wire-haired_fox_terrier' 'Border_terrier' 'miniature_poodle'
'Welsh_springer_spaniel' 'guillotine' 'entertainment_center'
'Rhodesian_ridgeback' 'Christmas_stocking' 'paddlewheel' 'barbell'
'Australian_terrier' 'French_loaf' 'jigsaw_puzzle' 'brass'
'Kerry_blue_terrier' 'switch' 'Brittany_spaniel' 'limousine' 'echidna'
'three-toed_sloth' 'Lhasa' 'microwave' 'chain' 'hatchet' 'maze' 'sea_lion'
'refrigerator' 'paintbrush' 'menu' 'rhinoceros_beetle' 'shopping_basket'
'wombat' 'soccer_ball' 'macaque' 'pop_bottle' 'whippet' 'cheetah'
'lakeside' 'pier' 'basset' 'dhole' 'theater_curtain' 'pool_table' 'borzoi'
'briard' 'maraca' 'ice_bear' 'komondor' 'notebook' 'Walker_hound' 'purse'
'wool' 'tiger_cat' 'sliding_door' 'shower_curtain' 'rapeseed' 'tripod'
'Scottish_deerhound' 'titi' 'croquet_ball' 'sombrero' 'sunglass'
'Blenheim_spaniel' 'buckeye' 'scuba_diver' 'oxcart' 'mountain_tent'
'ballplayer' 'rain_barrel' 'space_shuttle' 'affenpinscher' 'minibus' 'wok'
'warthog' 'hippopotamus' 'snorkel' 'desktop_computer' 'parachute' 'barrow'
'mushroom' 'park_bench' 'valley' 'consomme' 'fur_coat' 'quilt'
'Old_English_sheepdog' 'mouse' 'mongoose' 'king_penguin' 'toilet_seat'
'passenger_car' 'prison' 'coyote' 'bannister' 'rotisserie' 'sandbar'
'bucket' 'feather_boa' 'hand_blower' 'giant_schnauzer'
'American_black_bear' 'viaduct' 'cairn' 'chime' 'cab' 'padlock'
'black-footed_ferret' 'Irish_wolfhound' 'pizza' 'cup' 'neck_brace'
'lumbermill' 'washbasin' 'golfcart' 'Mexican_hairless' 'mitten'
'wood_rabbit' 'beaver' 'silky_terrier' 'soap_dispenser' 'bullfrog'
'Angora' 'binder' 'eel' 'dugong' 'common_iguana' 'shower_cap' 'sunglasses'
'vacuum' 'Afghan_hound' 'marmot' 'pretzel' 'box_turtle' 'bow_tie'
'Arabian_camel' 'beacon' 'pickup' 'goose' 'rifle' 'bell_cote' 'hamster'
'shoji' 'hare' 'rock_crab' 'bagel' 'whiptail' 'racket'
'hand-held_computer' 'lion' 'agama' 'jaguar' 'wild_boar'
'miniature_schnauzer' 'loupe' 'cliff' 'toyshop' 'mask' 'pot' 'axolotl'
'assault_rifle' 'ox' 'ski_mask' 'electric_fan' 'abaya' 'wallet' 'ashcan'
'zebra' 'cuirass' 'squirrel_monkey' 'mosquito_net' 'chest' 'cloak'
'parallel_bars' 'green_lizard' 'space_heater' 'broccoli' 'greenhouse'
'Sealyham_terrier' 'loggerhead' 'joystick' 'kimono' 'screw' 'mink'
'coral_reef' 'leafhopper' 'European_fire_salamander' 'quill' 'Windsor_tie'
'shovel' 'pajama' 'bison' 'crayfish' 'moped' 'nipple' 'seashore'
'sea_cucumber' 'bassinet' 'giant_panda' 'llama' 'hammerhead' 'chimpanzee'
'chickadee' 'bookcase' 'steam_locomotive' 'bib' 'cowboy_boot' 'black_swan'
'snail' 'bathing_cap' 'red_wolf' 'prairie_chicken' 'swimming_trunks'
'plastic_bag' 'hen' 'drumstick' 'stinkhorn' 'wolf_spider' 'brown_bear'
'gorilla' 'common_newt' 'window_screen' 'bolete' 'wig' 'cardoon' 'wing'
'plunger' 'beach_wagon' 'bulletproof_vest' 'jersey' 'goldfish'
'balance_beam' 'ram' 'panpipe' 'badger' 'French_horn' 'Band_Aid'
'terrapin' 'triceratops' 'African_chameleon' 'African_grey' 'jeep'
'oscilloscope' 'lampshade' 'acorn' 'power_drill' 'gar' 'great_grey_owl'
'partridge']
###Markdown
Cleaning Now that I know the first issues with the data, it's time to clean it!
###Code
# Prepare a copy of the main dataset that will be processed
data_copy = data.copy()
###Output
_____no_output_____
###Markdown
First, I'll get rid of the columns with nulls, the columns with information that will not be processed further on, and the ```retweeted``` column, which contains only ```False``` values. There are also some retweets present in the dataset - all the rows with non-null values in the ```retweeted_status_``` columns. These should be removed.
###Code
# Removing retweets based on retweeted_status_id
data_copy = data_copy.drop(data_copy[data_copy.retweeted_status_id > 0].index)
# Test if everything went ok.
data_copy[data_copy.retweeted_status_id > 0].retweeted_status_id.count()
data_copy[data_copy.retweeted_status_id > 0].retweeted_status_user_id.count()
data_copy[data_copy.retweeted_status_id > 0].retweeted_status_timestamp.count()
# Dispose of columns with a lot of nulls (in_reply_to_status_id,
# in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id,
# retweeted_status_timestamp, place)
data_copy = data_copy.drop('in_reply_to_status_id', axis=1)
data_copy = data_copy.drop('in_reply_to_user_id', axis=1)
data_copy = data_copy.drop('retweeted_status_id', axis=1)
data_copy = data_copy.drop('retweeted_status_user_id', axis=1)
data_copy = data_copy.drop('retweeted_status_timestamp', axis=1)
data_copy = data_copy.drop('place', axis=1)
# Dispose of columns information that will not be used further on (source,
# expanded urls, jpg_url, img_num)
data_copy = data_copy.drop('source', axis=1)
data_copy = data_copy.drop('expanded_urls', axis=1)
data_copy = data_copy.drop('jpg_url', axis=1)
data_copy = data_copy.drop('img_num', axis=1)
# Drop the "retweeted" columnas it contains only False values
data_copy = data_copy.drop('retweeted', axis=1)
# Verify the result
data_copy.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1991 entries, 0 to 2065
Data columns (total 21 columns):
tweet_id 1991 non-null int64
timestamp 1991 non-null object
text 1991 non-null object
rating_numerator 1991 non-null int64
rating_denominator 1991 non-null int64
name 1991 non-null object
doggo 1991 non-null object
floofer 1991 non-null object
pupper 1991 non-null object
puppo 1991 non-null object
p1 1991 non-null object
p1_conf 1991 non-null float64
p1_dog 1991 non-null bool
p2 1991 non-null object
p2_conf 1991 non-null float64
p2_dog 1991 non-null bool
p3 1991 non-null object
p3_conf 1991 non-null float64
p3_dog 1991 non-null bool
retweet_count 1991 non-null int64
favorite_count 1991 non-null int64
dtypes: bool(3), float64(3), int64(5), object(10)
memory usage: 301.4+ KB
###Markdown
Tweet idsThese are integers and should be strings.
###Code
# Transform the tweet_ids into strings
data_copy['tweet_id'] = data_copy['tweet_id'].astype(str)
# Make sure it's done right
data_copy.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1991 entries, 0 to 2065
Data columns (total 21 columns):
tweet_id 1991 non-null object
timestamp 1991 non-null object
text 1991 non-null object
rating_numerator 1991 non-null int64
rating_denominator 1991 non-null int64
name 1991 non-null object
doggo 1991 non-null object
floofer 1991 non-null object
pupper 1991 non-null object
puppo 1991 non-null object
p1 1991 non-null object
p1_conf 1991 non-null float64
p1_dog 1991 non-null bool
p2 1991 non-null object
p2_conf 1991 non-null float64
p2_dog 1991 non-null bool
p3 1991 non-null object
p3_conf 1991 non-null float64
p3_dog 1991 non-null bool
retweet_count 1991 non-null int64
favorite_count 1991 non-null int64
dtypes: bool(3), float64(3), int64(4), object(11)
memory usage: 301.4+ KB
###Markdown
Rating numerator/denominatorSome values look strange, e.g. 0, 165, 1776... The rating should be extracted once again.
###Code
ratings = data_copy.text.str.extract('((?:\d+\.)?\d+)\/((?:\d+\.)?\d+)', expand=True)
# Check out the unique numerator values
ratings[0].unique()
# Check out the unique denominator values
ratings[1].unique()
###Output
_____no_output_____
###Markdown
Nice... Some floats appeared in the numerator section, which partly explained the strange values. Still, the 0 and other high values are still present, so I have to check if that really is the case. Perhaps the code doesn't work as I think it does.
###Code
pd.set_option('display.max_colwidth', -1)
# First, the 0 value
d = data_copy[data_copy['rating_numerator'] == 0].text
print(d)
###Output
244 When you're so blinded by your systematic plagiarism that you forget what day it is. 0/10 https://t.co/YbEJPkg4Ag
826 PUPDATE: can't see any. Even if I could, I couldn't reach them to pet. 0/10 much disappointment https://t.co/c7WXaB2nqX
Name: text, dtype: object
###Markdown
Nope, these are legit values. Every time a 0 is present something sad happened. Now, let's check the other strangely high numerators.
###Code
# Then all other numerators
for numer in [84, 24, 165, 1776, 204, 50, 99, 80, 45, 60, 44, 143, 121, 144, 88, 420]:
e = data_copy[data_copy['rating_numerator'] == numer].text
print("CHECKING NUMERATOR: {}".format(numer))
print(e)
###Output
CHECKING NUMERATOR: 84
340 The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd
Name: text, dtype: object
CHECKING NUMERATOR: 24
410 Meet Sam. She smiles 24/7 & secretly aspires to be a reindeer. \nKeep Sam smiling by clicking and sharing this link:\nhttps://t.co/98tB8y7y7t https://t.co/LouL5vdvxx
Name: text, dtype: object
CHECKING NUMERATOR: 165
729 Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE
Name: text, dtype: object
CHECKING NUMERATOR: 1776
796 This is Atticus. He's quite simply America af. 1776/10 https://t.co/GRXwMxLBkh
Name: text, dtype: object
CHECKING NUMERATOR: 204
918 Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv
Name: text, dtype: object
CHECKING NUMERATOR: 50
995 This is Bluebert. He just saw that both #FinalFur match ups are split 50/50. Amazed af. 11/10 https://t.co/Kky1DPG4iq
Name: text, dtype: object
CHECKING NUMERATOR: 99
1016 Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1
Name: text, dtype: object
CHECKING NUMERATOR: 80
1041 Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12
Name: text, dtype: object
CHECKING NUMERATOR: 45
1059 From left to right:\nCletus, Jerome, Alejandro, Burp, & Titson\nNone know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK
Name: text, dtype: object
CHECKING NUMERATOR: 60
1125 Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa
Name: text, dtype: object
CHECKING NUMERATOR: 44
1201 Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ
Name: text, dtype: object
CHECKING NUMERATOR: 143
1373 Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3
Name: text, dtype: object
CHECKING NUMERATOR: 121
1374 Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55
Name: text, dtype: object
CHECKING NUMERATOR: 144
1505 IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq
Name: text, dtype: object
CHECKING NUMERATOR: 88
1564 Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw
Name: text, dtype: object
CHECKING NUMERATOR: 420
1788 After so many requests... here you go.\n\nGood dogg. 420/10 https://t.co/yfAAo1gdeY
Name: text, dtype: object
###Markdown
It looks like two values are false: 24/7 in tweet 410 (referring to availability 24 h, 7 days a week) and 50/50 in tweet 995 (where the real rating is 11/10). The 24/7 record should be removed from the set and the rating in tweet 995 changed to the true value.
###Code
condition247 = (data_copy.rating_numerator == 24) & (data_copy.rating_denominator == 7)
data_copy = data_copy.drop(data_copy[condition247].index , axis=0)
###Output
_____no_output_____
###Markdown
Check if the record was deleted properly.
###Code
e = data_copy[data_copy['rating_numerator'] == 24].text
print("CHECKING NUMERATOR: {}".format(24))
print(e)
###Output
CHECKING NUMERATOR: 24
Series([], Name: text, dtype: object)
###Markdown
Now, change the 50/50 rating to 11/10.
###Code
# As there is only one instance of 50/50 rating it can be called like this
condition5050 = (data_copy.rating_numerator == 50) & (data_copy.rating_denominator == 50)
# Change the 50/50 rating to 11/10
index5050 = data_copy[condition5050].index
index5050 = min(index5050)
data_copy.at[index5050, 'rating_numerator'] = 11
data_copy.at[index5050, 'rating_denominator'] = 10
# Test if the change went ok
print(data_copy.at[index5050, 'rating_numerator'], data_copy.at[index5050, 'rating_denominator'])
###Output
11 10
###Markdown
Now, I'll double check if similar cleaning is needed for the denominators: 70, 7, 150, 11, 170, 20, 50, 90, 80, 40, 130, 110, 120, 2.
###Code
# Checking the denominators
for numer in [70, 7, 150, 11, 170, 20, 50, 90, 80, 40, 130, 110, 120, 2]:
e = data_copy[data_copy['rating_denominator'] == numer].text
print("CHECKING DENOMINATOR: {}".format(numer))
print(e)
###Output
CHECKING DENOMINATOR: 70
340 The floofs have been released I repeat the floofs have been released. 84/70 https://t.co/NIYC820tmd
Name: text, dtype: object
CHECKING DENOMINATOR: 7
Series([], Name: text, dtype: object)
CHECKING DENOMINATOR: 150
729 Why does this never happen at my front door... 165/150 https://t.co/HmwrdfEfUE
Name: text, dtype: object
CHECKING DENOMINATOR: 11
870 After so many requests, this is Bretagne. She was the last surviving 9/11 search dog, and our second ever 14/10. RIP https://t.co/XAVDNDaVgQ
1399 This is Darrel. He just robbed a 7/11 and is in a high speed police chase. Was just spotted by the helicopter 10/10 https://t.co/7EsP8LmSp5
Name: text, dtype: object
CHECKING DENOMINATOR: 170
918 Say hello to this unbelievably well behaved squad of doggos. 204/170 would try to pet all at once https://t.co/yGQI3He3xv
Name: text, dtype: object
CHECKING DENOMINATOR: 20
961 Happy 4/20 from the squad! 13/10 for all https://t.co/eV1diwds8a
Name: text, dtype: object
CHECKING DENOMINATOR: 50
1059 From left to right:\nCletus, Jerome, Alejandro, Burp, & Titson\nNone know where camera is. 45/50 would hug all at once https://t.co/sedre1ivTK
1125 Here is a whole flock of puppers. 60/50 I'll take the lot https://t.co/9dpcw6MdWa
Name: text, dtype: object
CHECKING DENOMINATOR: 90
1016 Happy Saturday here's 9 puppers on a bench. 99/90 good work everybody https://t.co/mpvaVxKmc1
Name: text, dtype: object
CHECKING DENOMINATOR: 80
1041 Here's a brigade of puppers. All look very prepared for whatever happens next. 80/80 https://t.co/0eb7R1Om12
1564 Here we have an entire platoon of puppers. Total score: 88/80 would pet all at once https://t.co/y93p6FLvVw
Name: text, dtype: object
CHECKING DENOMINATOR: 40
1201 Happy Wednesday here's a bucket of pups. 44/40 would pet all at once https://t.co/HppvrYuamZ
Name: text, dtype: object
CHECKING DENOMINATOR: 130
1373 Two sneaky puppers were not initially seen, moving the rating to 143/130. Please forgive us. Thank you https://t.co/kRK51Y5ac3
Name: text, dtype: object
CHECKING DENOMINATOR: 110
1374 Someone help the girl is being mugged. Several are distracting her while two steal her shoes. Clever puppers 121/110 https://t.co/1zfnTJLt55
Name: text, dtype: object
CHECKING DENOMINATOR: 120
1505 IT'S PUPPERGEDDON. Total of 144/120 ...I think https://t.co/ZanVtAtvIq
Name: text, dtype: object
CHECKING DENOMINATOR: 2
2045 This is an Albanian 3 1/2 legged Episcopalian. Loves well-polished hardwood flooring. Penis on the collar. 9/10 https://t.co/d9NcXFKwLv
Name: text, dtype: object
###Markdown
There is a problem with three ratings:- 9/11 (referring to the tragic events of the WTC) should be replaced with the proper rating of 14/10 (index 870)- 7/11, which refers to the store mentioned in the text rather than a rating, should be replaced with 10/10 (index 1399)- 1/2 (which refers to the dog being 3 1/2 legged) should be replaced with 9/10 (index 2045)
###Code
# Change the 9/11 rating to 14/10
data_copy.at[870, 'rating_numerator'] = 14
data_copy.at[870, 'rating_denominator'] = 10
# Test if the change went ok
print(data_copy.at[870, 'rating_numerator'], data_copy.at[870, 'rating_denominator'])
# Change the 7/11 rating to 10/10
data_copy.at[1399, 'rating_numerator'] = 10
data_copy.at[1399, 'rating_denominator'] = 10
# Test if the change went ok
print(data_copy.at[1399, 'rating_numerator'], data_copy.at[1399, 'rating_denominator'])
# Change the 1/2 rating to 9/10
data_copy.at[2045, 'rating_numerator'] = 9
data_copy.at[2045, 'rating_denominator'] = 10
# Test if the change went ok
print(data_copy.at[2045, 'rating_numerator'], data_copy.at[2045, 'rating_denominator'])
###Output
9 10
###Markdown
Clean the badly assessed names. These were identified in the Assess section as lowercase, so all lowercase content in the name column will be replaced with ```None```.
###Code
condition_lower = (data_copy.name == data_copy.name.str.lower())
data_copy.loc[condition_lower == True, 'name'] = 'None'
# Check if the change went ok.
sorted(data_copy.name.unique())
###Output
_____no_output_____
###Markdown
The doggo, floofer, pupper and puppo should all be placed in a single column named ```dog_stage```.
###Code
stages = ['doggo', 'floofer', 'pupper', 'puppo']
data_copy['dog_stage'] = ''
data_copy = data_copy.reset_index(drop = True)
for n in range(data_copy.shape[0]):
dummy = []
for stags in stages:
if data_copy[stags][n] == stags:
dummy.append(data_copy[stags][n])
else:
next
    if len(dummy) == 0:
        data_copy.at[n, 'dog_stage'] = 'None'
    else:
        data_copy.at[n, 'dog_stage'] = ','.join(dummy)
# Verify the result
data_copy.dog_stage.unique()
###Output
_____no_output_____
###Markdown
The doggo, floofer, pupper, puppo columns can now be dropped.
###Code
data_copy = data_copy.drop('doggo', axis = 1)
data_copy = data_copy.drop('floofer', axis = 1)
data_copy = data_copy.drop('pupper', axis = 1)
data_copy = data_copy.drop('puppo', axis = 1)
# Verify the drop
data_copy.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1990 entries, 0 to 1989
Data columns (total 18 columns):
tweet_id 1990 non-null object
timestamp 1990 non-null object
text 1990 non-null object
rating_numerator 1990 non-null int64
rating_denominator 1990 non-null int64
name 1990 non-null object
p1 1990 non-null object
p1_conf 1990 non-null float64
p1_dog 1990 non-null bool
p2 1990 non-null object
p2_conf 1990 non-null float64
p2_dog 1990 non-null bool
p3 1990 non-null object
p3_conf 1990 non-null float64
p3_dog 1990 non-null bool
retweet_count 1990 non-null int64
favorite_count 1990 non-null int64
dog_stage 1990 non-null object
dtypes: bool(3), float64(3), int64(4), object(8)
memory usage: 239.1+ KB
###Markdown
Finally, the image prediction columns (starting with p1, p2, p3) should be replaced with columns containing:- the highest p-value (```p_value```)- the object prediction (```object_prediction```)- information on whether the identified object is a dog (```is_dog```)
###Code
# Leaving only the best prediction in a row
is_dog = []
object_prediction = []
p_value = []
for s in range(data_copy.shape[0]):
#Choose the best prediction
x = (data_copy.loc[s, 'p1'], data_copy.loc[s, 'p1_conf'], data_copy.loc[s, 'p1_dog'])
y = (data_copy.loc[s, 'p2'], data_copy.loc[s, 'p2_conf'], data_copy.loc[s, 'p2_dog'])
z = (data_copy.loc[s, 'p3'], data_copy.loc[s, 'p3_conf'], data_copy.loc[s, 'p3_dog'])
    max_p = max([x, y, z], key=lambda h: h[1])  # pick the prediction with the highest confidence
# Append the result to the specific list
is_dog.append(max_p[2])
object_prediction.append(max_p[0])
p_value.append(max_p[1])
# Populate the new columns with the computed values
data_copy['is_dog'] = is_dog
data_copy['object_prediction'] = object_prediction
data_copy['p_value'] = p_value
# Verify the formation of the new columns
data_copy.head()
###Output
_____no_output_____
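As a side note, the same best-prediction selection could be done without the explicit Python loop. A minimal vectorized sketch (an alternative, not the original author's approach, assuming the p1–p3 columns are still present at this point):

```python
# Vectorized sketch of the same best-prediction selection
conf = data_copy[['p1_conf', 'p2_conf', 'p3_conf']].values
best = conf.argmax(axis=1)                 # column index (0, 1 or 2) of the highest confidence per row
rows = np.arange(len(data_copy))
data_copy['p_value'] = conf[rows, best]
data_copy['object_prediction'] = data_copy[['p1', 'p2', 'p3']].values[rows, best]
data_copy['is_dog'] = data_copy[['p1_dog', 'p2_dog', 'p3_dog']].values[rows, best]
```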
###Markdown
Drop the original image prediction columns.
###Code
data_copy = data_copy.drop('p1', axis = 1)
data_copy = data_copy.drop('p1_conf', axis = 1)
data_copy = data_copy.drop('p1_dog', axis = 1)
data_copy = data_copy.drop('p2', axis = 1)
data_copy = data_copy.drop('p2_conf', axis = 1)
data_copy = data_copy.drop('p2_dog', axis = 1)
data_copy = data_copy.drop('p3', axis = 1)
data_copy = data_copy.drop('p3_conf', axis = 1)
data_copy = data_copy.drop('p3_dog', axis = 1)
# Verify the deletion
data_copy.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1990 entries, 0 to 1989
Data columns (total 12 columns):
tweet_id 1990 non-null object
timestamp 1990 non-null object
text 1990 non-null object
rating_numerator 1990 non-null int64
rating_denominator 1990 non-null int64
name 1990 non-null object
retweet_count 1990 non-null int64
favorite_count 1990 non-null int64
dog_stage 1990 non-null object
is_dog 1990 non-null bool
object_prediction 1990 non-null object
p_value 1990 non-null float64
dtypes: bool(1), float64(1), int64(4), object(6)
memory usage: 173.0+ KB
###Markdown
Object prediction namesThe object prediction names should all start with a capital letter and the underscores should be substituted with a single space.
###Code
data_copy.object_prediction = data_copy.object_prediction.str.capitalize()
data_copy.object_prediction = data_copy.object_prediction.str.replace('_', ' ')
data_copy.object_prediction.unique()
###Output
_____no_output_____
###Markdown
2. Storing, analyzing and visualizing The data should be stored in the ```twitter_archive_master.csv``` file.
###Code
data_copy.to_csv("twitter_archive_master.csv", index = False)
###Output
_____no_output_____
###Markdown
The wrangled data must now be analyzed and visualized. At least three (3) insights and one (1) visualization must be produced.
###Code
# Insight 1
data_copy.favorite_count.describe()
###Output
_____no_output_____
###Markdown
The tweets are favorited very unequally. With a min of 77 and a max of 163k this means 4 orders of magnitude... Still, 50% of them were favorited 1.9k-10.7k times, which is not that bad. The large spread explains the gigantic standard deviation of 12.7k.
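A quick supporting plot could make that spread visible (a sketch, not part of the original analysis, reusing the `plt` and `np` imports from the top of the notebook):

```python
# Sketch: the favorite_count distribution is heavily right-skewed, so plot it on a log10 scale
plt.hist(np.log10(data_copy.favorite_count), bins=40)
plt.xlabel('log10(favorite count)')
plt.ylabel('Number of tweets')
plt.title('Favorite counts span several orders of magnitude');
```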
###Code
# Insight 2
# In several hundred cases it was possible to identify dogs "stage"
ds = data_copy.dog_stage.value_counts()
h_tot = data_copy.dog_stage.count() - ds['None']
h1 = 100*ds['pupper'] / h_tot
h2 = 100*ds['doggo'] / h_tot
h3 = 100*ds['puppo'] / h_tot
h4 = 100*ds['doggo,pupper'] / h_tot
h5 = 100*ds['floofer'] / h_tot
h6 = 100*ds['doggo,floofer'] / h_tot
h7 = 100*ds['doggo,puppo'] / h_tot
plt.bar(x=['pupper', 'doggo', 'puppo', 'doggo,pupper', 'floofer', 'doggo,floofer', 'doggo,puppo'], height=[h1, h2, h3, h4, h5, h6, h7]);
plt.xlabel('Dog stage', size=20);
plt.xticks(size=15, rotation=90);
plt.ylabel('Percentage', size=20);
plt.yticks(size=15)
plt.title('Puppers rule!', size=30);
data_copy.dog_stage.value_counts()
###Output
_____no_output_____
###Markdown
You gotta admit that in these rare 300+ cases, in which the dog stage was identified, puppers are the most popular by being 66.34% of the pack.
###Code
# Insight 3
# There are some specific breeds that are posted more often than others
dogs = data_copy.object_prediction.value_counts().index.tolist()[:15]
heights = data_copy.object_prediction.value_counts()[:15]
bar1 = plt.bar(x=dogs, height=heights);
# Adding numerical counts near the bars
# After https://stackoverflow.com/questions/40489821/how-to-write-text-above-the-bars-on-a-bar-plot-python/40491960
for rect in bar1:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2.0, height, '%d' % int(height), ha='center', va='bottom')
plt.xlabel('Object recognized', size=20);
plt.xticks(size=15, rotation=90);
plt.ylabel('Count [a.u.]', size=20);
plt.yticks(size=15)
plt.title('The most popular WRD object is... ', size=30);
data_copy.p_value.describe(percentiles=[0.25,0.5,0.75,0.9, 0.95, 0.99])
###Output
_____no_output_____
###Markdown
Yep, clearly Retrievers are the favorites... Around 140 Golden Retrievers, almost 94 Labrador Retrievers, nearly 90 Pembrokes and 80 Chihuahuas. :] Even though these results should be treated with caution (I've taken the maximum p-value for each prediction, but only about 10% of the predictions meet the alpha=0.05 threshold), the result is still intriguing.I guess this explains a bit why Labs and Golden Retrievers are among the most popular dogs I see in family drama movies. :PObviously, there are dogs and non-dogs in the above graph, but I still consider it a valid result. It would not be the first time I see/get a picture of a toy or some other stuff with a funny AND abstract comment. I find it hilarious. :]
###Code
conditioness = (data_copy.dog_stage == "pupper") & (data_copy.p_value >= 0.95)
data_copy.at[143,'text']
###Output
_____no_output_____ |
04-e2e-pipeline.ipynb | ###Markdown
End-to-end NVIDIA Merlin Recommender System with Vertex AI.This notebook shows how to deploy and execute an end-to-end Vertex Pipeline to run the NVIDIA Merlin recommendation system.The notebook covers the following:1. Training pipeline overview.2. Set pipeline configurations.3. Build pipeline container images.4. Configure pipeline parameters.5. Compile KFP pipeline.6. Submit pipeline to Vertex AI. 1. Training Pipeline Overview Setup
###Code
import os
import json
from datetime import datetime
from google.cloud import aiplatform as vertex_ai
from kfp.v2 import compiler
PROJECT_ID = 'merlin-on-gcp' # Change to your project Id.
REGION = 'us-central1' # Change to your region.
BUCKET = 'merlin-on-gcp' # Change to your bucket.
MODEL_NAME = 'deepfm'
MODEL_VERSION = 'v01'
MODEL_DISPLAY_NAME = f'criteo-hugectr-{MODEL_NAME}-{MODEL_VERSION}'
WORKSPACE = f'gs://{BUCKET}/{MODEL_DISPLAY_NAME}'
TRAINING_PIPELINE_NAME = f'merlin-training-pipeline'
BQ_DATASET_NAME = 'criteo_pipeline' # Set to your BigQuery dataset including the Criteo dataset.
BQ_LOCATION = 'us' # Set to your BigQuery dataset location.
BQ_TRAIN_TABLE_NAME = 'train'
BQ_VALID_TABLE_NAME = 'valid'
NVT_IMAGE_NAME = 'nvt_preprocessing'
NVT_IMAGE_URI = f'gcr.io/{PROJECT_ID}/{NVT_IMAGE_NAME}'
NVT_DOCKERFILE = 'src/Dockerfile.nvtabular'
HUGECTR_IMAGE_NAME = 'hugectr_training'
HUGECTR_ITMAGE_URI = f'gcr.io/{PROJECT_ID}/{HUGECTR_IMAGE_NAME}'
HUGECTR_DOCKERFILE = 'src/Dockerfile.hugectr'
###Output
_____no_output_____
###Markdown
2. Set Pipeline Configurations
###Code
os.environ['PROJECT_ID'] = PROJECT_ID
os.environ['REGION'] = REGION
os.environ['BUCKET'] = BUCKET
os.environ['WORKSPACE'] = WORKSPACE
os.environ['BQ_DATASET_NAME'] = BQ_DATASET_NAME
os.environ['BQ_LOCATION'] = BQ_LOCATION
os.environ['BQ_TRAIN_TABLE_NAME'] = BQ_TRAIN_TABLE_NAME
os.environ['BQ_VALID_TABLE_NAME'] = BQ_VALID_TABLE_NAME
os.environ['TRAINING_PIPELINE_NAME'] = TRAINING_PIPELINE_NAME
os.environ['MODEL_NAME'] = MODEL_NAME
os.environ['MODEL_VERSION'] = MODEL_VERSION
os.environ['MODEL_DISPLAY_NAME'] = MODEL_DISPLAY_NAME
os.environ['NVT_IMAGE_URI'] = NVT_IMAGE_URI
os.environ['HUGECTR_ITMAGE_URI'] = HUGECTR_ITMAGE_URI
os.environ['MEMORY_LIMIT'] = '120G'
os.environ['CPU_LIMIT'] = '32'
os.environ['GPU_LIMIT'] = '4'
os.environ['GPU_TYPE'] = 'nvidia-tesla-t4'
os.environ['MACHINE_TYPE'] = 'a2-highgpu-4g'
os.environ['ACCELERATOR_TYPE'] = 'NVIDIA_TESLA_A100'
os.environ['ACCELERATOR_NUM'] = '4'
os.environ['NUM_WORKERS'] = '12'
os.environ['NUM_SLOTS'] = '26'
os.environ['MAX_NNZ'] = '2'
os.environ['EMBEDDING_VECTOR_SIZE'] = '11'
os.environ['MAX_BATCH_SIZE'] = '64'
os.environ['MODEL_REPOSITORY_PATH'] = '/models'
###Output
_____no_output_____
###Markdown
3. Build Pipeline Container Images
###Code
! gcloud builds submit --timeout "2h" --tag {NVT_IMAGE_URI} {NVT_DOCKERFILE} --machine-type=e2-highcpu-8
! gcloud builds submit --timeout "2h" --tag {HUGECTR_ITMAGE_URI} {HUGECTR_DOCKERFILE} --machine-type=e2-highcpu-8
###Output
_____no_output_____
###Markdown
4. Configure pipeline parameters
###Code
NUM_EPOCHS = 0
MAX_ITERATIONS = 50000
EVAL_INTERVAL = 1000
EVAL_BATCHES = 500
EVAL_BATCHES_FINAL = 2500
DISPLAY_INTERVAL = 200
SNAPSHOT_INTERVAL = 0
PER_GPU_BATCHSIZE = 2048
LR = 0.001
DROPOUT_RATE = 0.5
parameter_values = {
    'shuffle': json.dumps(None),  # select PER_PARTITION, PER_WORKER, FULL, or None.
'per_gpu_batch_size': PER_GPU_BATCHSIZE,
'max_iter': MAX_ITERATIONS,
'max_eval_batches': EVAL_BATCHES ,
'eval_batches': EVAL_BATCHES_FINAL ,
'dropout_rate': DROPOUT_RATE,
'lr': LR ,
'num_epochs': NUM_EPOCHS,
'eval_interval': EVAL_INTERVAL,
'snapshot': SNAPSHOT_INTERVAL,
'display_interval': DISPLAY_INTERVAL
}
###Output
_____no_output_____
###Markdown
5. Compile KFP pipeline
###Code
from src.pipelines.training_pipelines import training_bq
compiled_pipeline_path = 'merlin_training_bq.json'
compiler.Compiler().compile(
pipeline_func=training_bq,
package_path=compiled_pipeline_path
)
###Output
_____no_output_____
###Markdown
6. Submit pipeline to Vertex AI
###Code
job_name = f'merlin_training_bq_{datetime.now().strftime("%Y%m%d%H%M%S")}'
pipeline_job = vertex_ai.PipelineJob(
display_name=job_name,
template_path=compiled_pipeline_path,
enable_caching=False,
parameter_values=parameter_values,
)
pipeline_job.run()
###Output
_____no_output_____ |
Local and global privacy.ipynb | ###Markdown
lesson 3
###Code
import torch
# create_db_and_parallels / get_parallel_db are assumed to be defined in the earlier lesson cells of this notebook
db, pdbs = create_db_and_parallels(20)
def query(db):
    return db.sum()
full_db_result = query(db)
sensitivity = 0
for pdb in pdbs:
pdb_result = query(pdb)
db_distance = torch.abs(pdb_result- full_db_result)
if(db_distance >sensitivity):
sensitivity = db_distance
sensitivity
###Output
_____no_output_____
###Markdown
sensitivity = L1 sensitivity Project 3
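For reference, this is the standard definition (not from the original lesson notes): the L1 sensitivity of a query $f$ over neighbouring databases $D$ and $D'$ (differing in a single entry) is

$$\Delta f = \max_{D, D'} \lVert f(D) - f(D') \rVert_1,$$

which is exactly what the `sensitivity` helper below estimates empirically by removing one entry at a time.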
###Code
def sensitivity(query,n_entries=1000):
db,pdbs = create_db_and_parallels(n_entries)
full_db_result = query(db)
sensiti = 0
for pdb in pdbs:
pdb_result = query(pdb)
db_distance = torch.abs(pdb_result- full_db_result)
if(db_distance >sensiti):
sensiti = db_distance
return sensiti
def query(db):
return db.float().mean()
sensitivity(query)
#0.5/5000
###Output
_____no_output_____
###Markdown
Project: Calculate L1 Sensitivity For Threshold
###Code
def query(db,thresh=5):
return (db.sum()> thresh).float()
db,pdbs = create_db_and_parallels(10)
query(db)
###Output
_____no_output_____
###Markdown
Perform a Differencing Attack on row 10
###Code
db , _ = create_db_and_parallels(100)
pbd = get_parallel_db(db,remove_index=5)
db[5]
# differencing attack using sum query
sum(db) - sum(pbd)
sum(db)
# differencing attack using mean
(sum(db).float()/len(db)) - (sum(pbd).float()/len(pbd))
# differencing attack using threshold
(sum(db).float()>49) - (sum(pbd).float()>49)
import numpy as np
###Output
_____no_output_____
###Markdown
Project Implement Local Differential Privacy Randomize Response
###Code
db , pdbs = create_db_and_parallels(100)
t_result = torch.mean(db.float())
t_result
st_c_flip = (torch.rand(len(db))>0.5).float() # first coin flip: 50% honest
nd_c_flip = (torch.rand(len(db))>0.5).float() # mean 0.5
st_c_flip
nd_c_flip
# if first coin flip is 1 then use it
db.float() * st_c_flip
# lie
augmented_db=db.float()* st_c_flip+(1-st_c_flip) * (nd_c_flip) # skew toward noise
torch.mean(db.float())
torch.mean(augmented_db.float())
# if we knew
torch.mean(augmented_db.float()) * 2 -0.5
db_result =torch.mean(augmented_db.float()) * 2 -0.5
# so Implement query
def query2(db):
t_result = torch.mean(db.float())
    st_c_flip = (torch.rand(len(db))>0.5).float() # first coin flip: 50% honest
    nd_c_flip = (torch.rand(len(db))>0.5).float() # second coin flip: mean 0.5
    augmented_db=db.float()* st_c_flip+(1-st_c_flip) * (nd_c_flip) # skew toward noise
db_result =torch.mean(augmented_db.float()) * 2 -0.5
return db_result, t_result
db , pdbs = create_db_and_parallels(100)
private , trueN=query2(db)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
db , pdbs = create_db_and_parallels(1000)
private , trueN=query2(db)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
db , pdbs = create_db_and_parallels(10000)
private , trueN=query2(db)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
###Output
with noise tensor(0.4918)
with out noise tensor(0.5022)
###Markdown
>> LOCAL differential privacy GUARANTEES protection without bias, but it is data hungry if you want more accuracy. Re-skew the mean to recover the global estimate
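A short note on where the re-skewing in `query3` below comes from (my own derivation, consistent with the code): the first coin is honest with probability $1-\text{noise}$, so

$$\mathbb{E}[\text{skewed mean}] = (1-\text{noise})\cdot\text{true mean} + \text{noise}\cdot 0.5$$

and solving for the true mean gives

$$\text{true mean} \approx \frac{\text{skewed mean} - 0.5\cdot\text{noise}}{1-\text{noise}},$$

which is algebraically the same as the `((skew_result/noise)-0.5) * noise/(1-noise)` expression used in the code.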
###Code
def query3(db,noise = 0.2):
t_result = torch.mean(db.float())
    st_c_flip = (torch.rand(len(db))>noise).float() # first coin flip: honest with probability (1-noise)
    nd_c_flip = (torch.rand(len(db))>0.5).float() # second coin flip: mean 0.5, used when the first flip lies
    augmented_db=db.float()* st_c_flip+(1-st_c_flip) * (nd_c_flip) # skew toward noise
skew_result = augmented_db.float().mean()
private_result = ((skew_result/noise)-0.5) * noise/(1-noise)
return private_result, t_result
true_dist_mean = 0.7 # 70% say YES
noise_dist_mean = 0.5 # 50/50
augmented_db_mean = (true_dist_mean + noise_dist_mean) / 2  # expected reported mean under 50/50 randomized response
#
# reverse this to adjust back to the true mean
#
db , pdbs = create_db_and_parallels(100)
private , trueN=query3(db,noise=0.2)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
db , pdbs = create_db_and_parallels(100)
private , trueN=query3(db,noise=0.5)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
db , pdbs = create_db_and_parallels(100)
private , trueN=query3(db,noise=0.8)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
db , pdbs = create_db_and_parallels(1000)
private , trueN=query3(db,noise=0.8)
print("with noise "+str(private) )
print("with out noise "+str(trueN) )
###Output
_____no_output_____
###Markdown
MORE noise = more secure - it filters out unique individuals while keeping the general characteristics (consistent) - more data => both better and more secure Differential Privacy Query
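For context, the standard Laplace mechanism (textbook definition, not from the original notes) perturbs a query with noise scaled to its sensitivity:

$$M(D) = f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\epsilon}\right)$$

The cells below sketch exactly this, with $b = \Delta f / \epsilon$ as the scale of the Laplace distribution.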
###Code
db, pdbs = create_db_and_parallels(100)
def M(db):
    # Laplace mechanism sketch (sensitivity assumed to be 1): add Laplace noise scaled by 1/epsilon
    noise = np.random.laplace(0, 1/epsilon, 1)
    return query(db) + noise
epsilon = 0.00001
import numpy as np
db , pdbs = create_db_and_parallels(100)
sum(db) # sensitivity = 0 (0,1)
def sum_query(db):
return db.sum()
def laplacian_mechanism(db, query, sensitivity):
    beta = sensitivity / epsilon
    noise = torch.tensor(np.random.laplace(0, beta, 1))  # Laplace noise with mean 0 and scale beta
    return query(db) + noise
def mean_query(db):
return torch.mean(db.float())
laplacian_mechanism(db, sum_query, 1)
laplacian_mechanism(db, mean_query, 1/100)  # sensitivity of the mean: 1/number of entries (100)
# a very small epsilon gives the strongest protection, but adds a lot of noise to the result
###Output
_____no_output_____
###Markdown
Generating Differentially Private Labels For a Dataset
###Code
import numpy as np
num_teachers = 10     # 10 hospitals (teachers)
num_example = 10000   # size of the dataset
num_label = 10
preds = (np.random.rand(num_teachers, num_example) * num_label).astype(int)  # fake labels from each teacher
preds[:, 0]  # the 10 teachers' guesses for the first example
###Output
_____no_output_____
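A minimal sketch of the natural next step (my assumption about where the lesson is heading, in the spirit of PATE-style aggregation): add Laplace noise to each example's per-label vote counts and take the noisy argmax to obtain differentially private labels. The `epsilon_labels` value is hypothetical.

```python
# Sketch: noisy-max aggregation of the 10 teachers' votes (hypothetical per-label epsilon)
epsilon_labels = 0.1
new_labels = []
for example in range(num_example):
    label_counts = np.bincount(preds[:, example], minlength=num_label)              # votes per label
    noisy_counts = label_counts + np.random.laplace(0, 1/epsilon_labels, num_label)  # add Laplace noise
    new_labels.append(int(np.argmax(noisy_counts)))
new_labels = np.array(new_labels)
new_labels[:10]
```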
###Markdown
Remote Execute
###Code
bob.clear_objects()  # 'bob' is assumed to be a PySyft worker defined in earlier cells of this course
###Output
_____no_output_____ |
playbook/tactics/defense-evasion/T1027.ipynb | ###Markdown
T1027 - Obfuscated Files or InformationAdversaries may attempt to make an executable or file difficult to discover or analyze by encrypting, encoding, or otherwise obfuscating its contents on the system or in transit. This is common behavior that can be used across different platforms and the network to evade defenses. Payloads may be compressed, archived, or encrypted in order to avoid detection. These payloads may be used during Initial Access or later to mitigate detection. Sometimes a user's action may be required to open and [Deobfuscate/Decode Files or Information](https://attack.mitre.org/techniques/T1140) for [User Execution](https://attack.mitre.org/techniques/T1204). The user may also be required to input a password to open a password protected compressed/encrypted file that was provided by the adversary. (Citation: Volexity PowerDuke November 2016) Adversaries may also used compressed or archived scripts, such as JavaScript. Portions of files can also be encoded to hide the plain-text strings that would otherwise help defenders with discovery. (Citation: Linux/Cdorked.A We Live Security Analysis) Payloads may also be split into separate, seemingly benign files that only reveal malicious functionality when reassembled. (Citation: Carbon Black Obfuscation Sept 2016)Adversaries may also obfuscate commands executed from payloads or directly via a [Command and Scripting Interpreter](https://attack.mitre.org/techniques/T1059). Environment variables, aliases, characters, and other platform/language specific semantics can be used to evade signature based detections and application control mechanisms. (Citation: FireEye Obfuscation June 2017) (Citation: FireEye Revoke-Obfuscation July 2017)(Citation: PaloAlto EncodedCommand March 2017) Atomic Tests
###Code
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
###Output
_____no_output_____
###Markdown
Atomic Test 1 - Decode base64 Data into ScriptCreates a base64-encoded data file and decodes it into an executable shell scriptUpon successful execution, sh will execute art.sh, which is a base64 encoded command, that stdouts `echo Hello from the Atomic Red Team`.**Supported Platforms:** macos, linux Attack Commands: Run with `sh````shsh -c "echo ZWNobyBIZWxsbyBmcm9tIHRoZSBBdG9taWMgUmVkIFRlYW0= > /tmp/encoded.dat"cat /tmp/encoded.dat | base64 -d > /tmp/art.shchmod +x /tmp/art.sh/tmp/art.sh```
###Code
Invoke-AtomicTest T1027 -TestNumbers 1
###Output
_____no_output_____
###Markdown
Atomic Test 2 - Execute base64-encoded PowerShellCreates base64-encoded PowerShell code and executes it. This is used by numerous adversaries and malicious tools.Upon successful execution, powershell will execute an encoded command and stdout default is "Write-Host "Hey, Atomic!"**Supported Platforms:** windows Attack Commands: Run with `powershell````powershell$OriginalCommand = 'Write-Host "Hey, Atomic!"'$Bytes = [System.Text.Encoding]::Unicode.GetBytes($OriginalCommand)$EncodedCommand =[Convert]::ToBase64String($Bytes)$EncodedCommandpowershell.exe -EncodedCommand $EncodedCommand```
###Code
Invoke-AtomicTest T1027 -TestNumbers 2
###Output
_____no_output_____
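###Markdown
As a side note (an added illustration, not part of the atomic test): the `-EncodedCommand` value above is simply the Base64 of the UTF-16LE ("Unicode") bytes of the script, so it can be reproduced or decoded outside PowerShell, for example in Python:
```python
import base64

original_command = 'Write-Host "Hey, Atomic!"'
# PowerShell's -EncodedCommand expects Base64 of the UTF-16LE bytes of the command
encoded_command = base64.b64encode(original_command.encode("utf-16-le")).decode("ascii")
print(encoded_command)
# Decoding works the same way in reverse, which helps when triaging suspicious command lines
print(base64.b64decode(encoded_command).decode("utf-16-le"))
```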
###Markdown
Atomic Test 3 - Execute base64-encoded PowerShell from Windows Registry

Stores base64-encoded PowerShell code in the Windows Registry and deobfuscates it for execution. This is used by numerous adversaries and malicious tools. Upon successful execution, powershell will execute encoded command and read/write from the registry.

**Supported Platforms:** windows

Attack Commands: Run with `powershell`
```powershell
$OriginalCommand = 'Write-Host "Hey, Atomic!"'
$Bytes = [System.Text.Encoding]::Unicode.GetBytes($OriginalCommand)
$EncodedCommand = [Convert]::ToBase64String($Bytes)
$EncodedCommand
Set-ItemProperty -Force -Path HKCU:Software\Microsoft\Windows\CurrentVersion -Name Debug -Value $EncodedCommand
powershell.exe -Command "IEX ([Text.Encoding]::UNICODE.GetString([Convert]::FromBase64String((gp HKCU:Software\Microsoft\Windows\CurrentVersion Debug).Debug)))"
```
###Code
Invoke-AtomicTest T1027 -TestNumbers 3
###Output
_____no_output_____
###Markdown
Atomic Test 4 - Execution from Compressed File

Mimic execution of compressed executable. When successfully executed, calculator.exe will open.

**Supported Platforms:** windows

Dependencies: Run with `powershell`!

Description: T1027.exe must exist on disk at specified location

Check Prereq Commands:
```powershell
if (Test-Path %temp%\temp_T1027.zip\T1027.exe) {exit 0} else {exit 1}
```

Get Prereq Commands:
```powershell
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest "https://github.com/redcanaryco/atomic-red-team/raw/master/atomics/T1027/bin/T1027.zip" -OutFile "$env:temp\T1027.zip"
Expand-Archive -path "$env:temp\T1027.zip" -DestinationPath "$env:temp\temp_T1027.zip\" -Force
```
###Code
Invoke-AtomicTest T1027 -TestNumbers 4 -GetPreReqs
###Output
_____no_output_____
###Markdown
Attack Commands: Run with `command_prompt`
```command_prompt
"%temp%\temp_T1027.zip\T1027.exe"
```
###Code
Invoke-AtomicTest T1027 -TestNumbers 4
###Output
_____no_output_____ |
ETL_Naccrra_OMCC_Provider_Database.ipynb | ###Markdown
ETL for Naccrra OMCC Provider Database
###Code
# import following library in terminal
# $ brew cask install chromedriver
#install all libraries
!pip install pandas
!pip install sqlalchemy
!pip install numpy
!pip install splinter
!pip install bs4
!pip install ipython-sql
!pip install selenium
#import libraries that will be utilized
import os
import csv
import pandas as pd
import json
from sqlalchemy import create_engine
import numpy as np
###Output
_____no_output_____
###Markdown
Extract data from Naccrra Online Database
###Code
from splinter import Browser
from bs4 import BeautifulSoup
import time
from selenium.common.exceptions import ElementNotVisibleException
# Use splinter to set up chrome driver
executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
browser = Browser('chrome', **executable_path, headless=False)
pg_num = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29, 30,
31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
51, 52, 53, 54, 55, 56]
res = []
for i in pg_num:
url = f'http://naccrrapps.naccrra.org/navy/directory/programs.php?program=omcc&state=CA&pagenum={i}'
#chromdriver will visit url provided
browser.visit(url)
#set timer to wait 3 seconds
time.sleep(3)
# Parse HTML with Beautiful Soup
html=browser.html
soup = BeautifulSoup(html, 'html.parser')
# Use Beautiful Soup's find method to navigate and retrieve attributes
table_rows = soup.table.find_all('tr')
#iterate through table data
for tr in table_rows:
td = tr.find_all('td')
row = [tr.text.strip() for tr in td if tr.text.strip()]
print(row)
if(row):
res.append(row)
###Output
[]
['La Rue Park Child Development Center', 'Child Care Center', '50 Atrium Way', 'Davis', 'CA', '95616', '(530) 753-8716', '[email protected]']
['Carousel Preschool', 'Child Care Center', '8333 Airport Blvd.', 'Los Angeles', 'CA', '90045', '(310) 216-6641', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '2329 Vehicle Dr', 'Rancho Cordova', 'CA', '95670', '(916) 635-5700', '[email protected]']
['Adventure Club - Quail Glen', 'Child Care Center', '1250 Canevari Dr', 'Roseville', 'CA', '95747', '(916) 772-7529', '[email protected]']
['Adventure Club - Coyote Ridge', 'Child Care Center', '1751 Morningstar Dr', 'Roseville', 'CA', '95747', '(916) 772-7271', '[email protected]']
['Azusa Discovery Center', 'Child Care Center', '155 W. Arrow Highway', 'Azusa', 'CA', '91702', '(626) 334-1806', '[email protected]']
["Light of Christ Children's Center", 'Child Care Center', '341 South Kalmia Street', 'Escondido', 'CA', '92025', '(760) 745-6849', '[email protected]']
['Little Peoples Corner Preschool/daycare', 'Child Care Center', '3844 Walnut Drive #c', 'Eureka', 'CA', '95503', '(707) 445-0339', '[email protected]']
['Kindercare Learning Center LLC', 'Child Care Center', '5448 San Juan Ave', 'Citrus Heights', 'CA', '95610', '(916) 961-5599', '[email protected]']
['Olive Knolls Christian School', 'Child Care Center', '6201 Fruitvale Ave.', 'Bakersfield', 'CA', '93308', '(661) 393-3566', '[email protected]']
['Peace Lutheran Early Childhood Education Center', 'Child Care Center', '924 San Juan Rd', 'Sacramento', 'CA', '95834', '(916) 927-4060', '[email protected]']
['Kids on Campus', 'Child Care Center', '6889 El Fuerte Street', 'Carlsbad', 'CA', '92009', '(760) 744-4776', '[email protected]']
['West Redding Preschool', 'Child Care Center', '3490 Placer Street', 'Redding', 'CA', '96001', '(530) 243-2225', '[email protected]']
['Kiddie Academy Child Care Learning Center', 'Child Care Center', '3766 Mission Ave. Suite # 110', 'Oceanside', 'CA', '92058', '(760) 439-5552', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '2415 S Centre City Pkwy', 'Escondido', 'CA', '92025', '(760) 745-2474', '[email protected]']
['Action Day Learning Center', 'Child Care Center', '400 Stafford Street', 'Folsom', 'CA', '95630', '(916) 985-0976', '[email protected]']
["Childtime Children's Center", 'Child Care Center', '9903 Camino Media', 'Bakersfield', 'CA', '93311', '(661) 665-7790', '[email protected]']
['La Quinta Preschool', 'Child Care Center', '49-955 Moon River Dr', 'La Quinta', 'CA', '92253', '(760) 564-2848', '[email protected]']
['Heather Ridge Child Care Center', 'Child Care Center', '820 St Marks Street', 'Redding', 'CA', '96003', '(530) 241-7226', '[email protected]']
["Childtime Children's Center", 'Child Care Center', '2320 Floyd Ave.', 'Modesto', 'CA', '95355', '(209) 551-0255', '[email protected]']
['Adventure Club - Heritage Oak Elem. Sch.', 'Child Care Center', '2271 Americana Way', 'Roseville', 'CA', '95747', '(916) 773-3959', '[email protected]']
['KinderCare Learning Center', 'Child Care Center', '917 Hampshire Rd', 'Westlake Village', 'CA', '91361', '(805) 494-5152', '[email protected]']
['Girls Inc. Of Greater North Santa Barbara', 'Child Care Center', '4973 Hollister Ave', 'Goleta', 'CA', '93110', '(805) 967-0319', '[email protected]']
['KinderCare Education LLC', 'Child Care Center', '11961 Perris Blvd', 'Moreno Valley', 'CA', '92557', '(951) 243-6558', '[email protected]']
['Treehouse Christian Preschool 2', 'Child Care Center', '3301 Coffee Rd.', 'Modesto', 'CA', '95355', '(209) 312-9449', '[email protected]']
[]
['Sefi Day School', 'Child Care Center', '380 Telegraph Canyon Rd.', 'Chula Vista', 'CA', '91910', '(619) 422-7115', '[email protected]']
['Bright Horizons Tierrasanta', 'Child Care Center', '6090 Santo Rd.', 'San Diego', 'CA', '92124', '(858) 467-1800', '[email protected]']
['Merryhill School', 'Child Care Center', '10250 Trinity Parkway', 'Stockton', 'CA', '95219', '(209) 474-0518', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '2515 West Sunflower Ave', 'Santa Ana', 'CA', '92704', '(714) 540-4750', '[email protected]']
['Michelle Harris Family Child Care', 'Family Child Care Home', 'N/A', 'San Diego', 'CA', '92126', '(858) 586-7239', '[email protected]']
['La Petite Academy', 'Child Care Center', '12668 Sabre Springs Pkwy', 'San Diego', 'CA', '92128', '(858) 486-7197', '[email protected]']
['KinderCare Education LLC', 'Child Care Center', '3655 Via Mercado', 'La Mesa', 'CA', '91941', '(619) 670-9388', '[email protected]']
['Barajas Family Child Care', 'Family Child Care Home', 'N/A', 'San Jacinto', 'CA', '92583', '(714) 334-1531', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '2515 E. South St.', 'Anaheim', 'CA', '92806', '(714) 774-5141', '[email protected]']
['Noahs Ark Learning Center', 'Child Care Center', '1410 Foothill Drive', 'Vista', 'CA', '92084', '(760) 724-5445', '[email protected]']
['Heaven Sent', 'Child Care Center', '520 Pine Ave.', 'Pacific Grove', 'CA', '93950', '(831) 373-1922', '[email protected]']
['Tutor Time Child Care/Learning Center', 'Child Care Center', '26624 Margarita Rd', 'Murrieta', 'CA', '92563', '(951) 461-7900', '[email protected]']
['Narinder Family Child Care Home', 'Family Child Care Home', 'N/A', 'Yuba City', 'CA', '95993', '(530) 674-5991', '[email protected]']
['ABC Child Care Village', 'Child Care Center', '40045 Village Road', 'Temecula', 'CA', '92591', '(951) 491-0940', '[email protected]']
['Gonzalez Haydee Family Child Care', 'Family Child Care Home', 'N/A', 'Oceanside', 'CA', '92056', '(760) 631-2674', '[email protected]']
['La Petite Academy at Vista', 'Child Care Center', '725 Shadowridge Dr', 'Vista', 'CA', '92083', '(760) 727-0648', '[email protected]']
['La Jolla Family YMCA- Torrey Pines Elementary', 'Child Care Center', '8350 Cliffridge Ave', 'La Jolla', 'CA', '92037', '(858) 453-3483', '[email protected]']
['Kindercare Learning Center LLC', 'Child Care Center', '10130 Rothgard Road', 'Spring Valley', 'CA', '91977', '(619) 670-6566', '[email protected]']
['Janet Anseline Welsh', 'Family Child Care Home', 'N/A', 'Oceanside', 'CA', '92057', '(760) 724-9246', '[email protected]']
['Elizabeth Garza Family Child Care', 'Family Child Care Home', 'N/A', 'Vista', 'CA', '92084', '(760) 724-2759', '[email protected]']
['La Petite Academy Inc-Grand Ave', 'Child Care Center', '722 S. Grand Avenue', 'Diamond Bar', 'CA', '91765', '(909) 860-4009', '[email protected]']
["Childtime Children's Center", 'Child Care Center', '5565 Lake Park Way', 'La Mesa', 'CA', '91942', '(619) 460-0310', '[email protected]']
['Merry Go Round Learning Center', 'Child Care Center', '2749 Lemon Grove Ave', 'Lemon Grove', 'CA', '91945', '(619) 469-7281', '[email protected]']
['Eberly Day Care', 'Family Child Care Home', 'N/A', 'Oceanside', 'CA', '92056', '(760) 806-4710', '[email protected]']
['Learning Center Child Development Preschool', 'Child Care Center', '17565 Los Alamos St', 'Fountain Valley', 'CA', '92708', '(714) 593-0753', '[email protected]']
[]
['Ivy Crest Montessori Private School', 'Child Care Center', '2025 E. Chapman Ave.', 'Fullerton', 'CA', '92831', '(714) 879-6091', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '455 E Foothill Blvd', 'San Dimas', 'CA', '91773', '(909) 599-0597', '[email protected]']
["Kids' Care Club", 'Child Care Center', '9995 Carmel Mountain Road #B8', 'San Diego', 'CA', '92129', '(858) 538-5437', '[email protected]']
['Kindercare Learning Center LLC', 'Child Care Center', '5396 Walnut Avenue', 'Irvine', 'CA', '92604', '(949) 551-6808', '[email protected]']
['Poway Country Preschool', 'Child Care Center', '14411 Norwalk Lane', 'Poway', 'CA', '92064', '(858) 486-4420', '[email protected]']
['Hillcrest Country Day School', 'Child Care Center', '2000 West Rd', 'La Habra Heights', 'CA', '90631', '(562) 943-8332']
['Hobbledehoy Montessori Preschool', 'Child Care Center', '2321 Jane Ln.', 'Mountain View', 'CA', '94043', '(650) 968-1155', '[email protected]']
['Barry Ted Moskowitz Child Care Center', 'Child Care Center', '880 Front Street Suite 1295', 'San Diego', 'CA', '92101', '(619) 539-5569', '[email protected]']
['Montessori School of Kearny Mesa', 'Child Care Center', '3411 Sandrock Road', 'San Diego', 'CA', '92123', '(858) 505-0332', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '8887 Vintage Park Dr', 'Sacramento', 'CA', '95828', '(916) 682-1111', '[email protected]']
['New Life Figarden School', 'Child Care Center', '4230 W Figarden Dr', 'Fresno', 'CA', '93722', '(559) 229-8687', '[email protected]']
["Childtime Children's Center", 'Child Care Center', '14700 Perris Blvd', 'Moreno Valley', 'CA', '92553', '(951) 242-0707', '[email protected]']
['Campos Maritza Family Child Care', 'Family Child Care Home', 'N/A', 'San Diego', 'CA', '92115', '(619) 583-7180', '[email protected]']
['Brown Family Child Care', 'Family Child Care Home', 'N/A', 'Murrieta', 'CA', '92563', '(619) 565-8667', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '2354 Fenton Street', 'Chula Vista', 'CA', '91914', '(619) 656-9853', '[email protected]']
['Sunshine House - Loma Vista Kid Zone', 'Child Care Center', '2110 San Jose Ave', 'Brentwood', 'CA', '94513', '(925) 513-1113', '[email protected]']
['Sylvia Alvarez-Tostado', 'Family Child Care Home', 'N/A', 'San Diego', 'CA', '92154', '(619) 575-6791', '[email protected]']
['Paulette Johnson', 'Family Child Care Home', 'N/A', 'San Diego', 'CA', '92113', '(619) 527-0463', '[email protected]']
['La Petite Academy Inc', 'Child Care Center', '1910 W Kettleman Lane', 'Lodi', 'CA', '95240', '(209) 368-0303', '[email protected]']
['Discovery Isle Child Development Center', 'Child Care Center', '32220 Temecula Parkway', 'Temecula', 'CA', '92592', '(951) 303-3055', '[email protected]']
['STARS Preschool', 'Child Care Center', '1402 Golden Hill Rd', 'Paso Robles', 'CA', '93446', '(805) 238-0200', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '265 W. Grantline Road', 'Tracy', 'CA', '95376', '(209) 835-9247', '[email protected]']
['KinderCare Learning Center LLC', 'Child Care Center', '610 E Nuevo Rd', 'Perris', 'CA', '92571', '(951) 943-6476', '[email protected]']
['Bradshaw Christian School', 'Child Care Center', '8324 Bradshaw Rd', 'Sacramento', 'CA', '95829', '(916) 688-0521', '[email protected]']
['Owensbrown, Stephanie Family Child Care', 'Family Child Care Home', 'N/A', 'Chula Vista', 'CA', '91913', '(619) 621-1764', '[email protected]']
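###Markdown
A lighter-weight alternative (sketch only, not used above): if the provider table is rendered as plain server-side HTML, `pandas.read_html` could parse each page directly without driving a browser. The URL pattern and page range are taken from the scraping loop above; it is kept commented out since the site may require a real browser session, as the splinter approach assumes.
###Code
# Sketch of an alternative extraction path using pandas.read_html (commented out on purpose).
# def fetch_page(page_num):
#     url = ("http://naccrrapps.naccrra.org/navy/directory/programs.php"
#            f"?program=omcc&state=CA&pagenum={page_num}")
#     return pd.read_html(url)[0]  # read_html returns one DataFrame per <table> on the page
#
# frames = [fetch_page(i) for i in range(1, 57)]
# providers = pd.concat(frames, ignore_index=True)
###Output
_____no_output_____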
###Markdown
Convert data into Dataframe
###Code
#convert list to dataframe
omcc_providers_df = pd.DataFrame(res, columns=["Provider Name", "Type of Care", "Address", "City", "State", "Zip", "Phone", "Email"])
#rename columns
omcc_providers_df.rename(columns = {'Provider Name': 'provider_name',
'Type of Care': 'type_of_care',
'Address': 'address',
'City': 'city',
'State': 'state',
'Zip': 'zip',
'Phone': 'phone',
'Email': 'email'}, inplace = True)
# remove rows with missing values (assign the result back, otherwise the drop has no effect)
omcc_providers_df = omcc_providers_df.dropna(axis=0, how='any')
omcc_providers_df
# cast zip codes to integers and check the column dtypes
omcc_providers_df['zip'] = omcc_providers_df['zip'].astype(int)
omcc_providers_df.dtypes
###Output
_____no_output_____
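###Markdown
A couple of lightweight sanity checks (an added sketch) before loading the table into the database; the zip-code range below is only a rough bound for California.
###Code
# Optional sanity checks before loading (illustrative thresholds)
print("duplicate rows:", omcc_providers_df.duplicated().sum())
# California zip codes roughly fall in the 90000-96199 range
bad_zips = omcc_providers_df[~omcc_providers_df['zip'].between(90000, 96199)]
print("rows with unexpected zip codes:", len(bad_zips))
###Output
_____no_output_____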
###Markdown
Run SQL in Jupyter Notebook
###Code
%load_ext sql
DB_ENDPOINT = "localhost"
DB = 'child_care_centers_db'
DB_USER = 'postgres'
DB_PASSWORD = 'postgres'  # enter your password here (the connection string used later in this notebook uses 'postgres')
DB_PORT = '5432'
# postgresql://username:password@host:port/database
conn_string = "postgresql://{}:{}@{}:{}/{}" \
.format(DB_USER, DB_PASSWORD, DB_ENDPOINT, DB_PORT, DB)
print(conn_string)
%sql $conn_string
###Output
_____no_output_____
###Markdown
Connect to local database
###Code
rds_connection_string = "postgres:postgres@localhost:5432/child_care_centers_db"
engine = create_engine(f'postgresql://{rds_connection_string}')
#check for tables
engine.table_names()
#put dataframe into postgreSQL
omcc_providers_df.to_sql(name='naccrra_table', con=engine, if_exists='append', index=False)
###Output
_____no_output_____
###Markdown
Confirm data has been added by querying the `naccrra_table` table
###Code
pd.read_sql_query('select * from naccrra_table', con=engine).head()
###Output
_____no_output_____ |
BasicSimulation.ipynb | ###Markdown
###Code
!pip install qiskit
from qiskit import QuantumCircuit, assemble, Aer
qc_output = QuantumCircuit(8)
qc_output.measure_all()
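# Note: no gates are applied before the measurement above, so all eight qubits stay in |0>
# and the simulator below should return only the all-zeros bitstring '00000000'.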
qc_output.draw(initial_state=True)
from qiskit.visualization import plot_histogram
# Aer provides the simulator backends
simul = Aer.get_backend('aer_simulator')
##Storing the results of aer_simulator in the 'result' variable
result = simul.run(qc_output).result()
counts = result.get_counts()
plot_histogram(counts)
###Output
_____no_output_____ |
16_SQL_Assignments/SQL Assignment.ipynb | ###Markdown
SQL Assignment
###Code
import pandas as pd
import sqlite3
from IPython.display import display, HTML
# Note that this is not the same db we have used in course videos, please download from this link
# https://drive.google.com/file/d/1O-1-L1DdNxEK6O6nG2jS31MbrMh-OnXM/view?usp=sharing
conn = sqlite3.connect("Db-IMDB-Assignment.db")
###Output
_____no_output_____
###Markdown
Overview of all tables
###Code
tables = pd.read_sql_query("SELECT NAME AS 'Table_Name' FROM sqlite_master WHERE type='table'",conn)
tables = tables["Table_Name"].values.tolist()
for table in tables:
query = "PRAGMA TABLE_INFO({})".format(table)
schema = pd.read_sql_query(query,conn)
print("Schema of",table)
display(schema)
print("-"*100)
print("\n")
###Output
Schema of Movie
###Markdown
Useful tips:
1. The year column in the 'Movie' table has a few characters other than numbers, which need to be preprocessed: take a substring of the last 4 characters, and it is better to convert it to an int type, e.g. CAST(SUBSTR(TRIM(m.year),-4) AS INTEGER)
2. For almost all the TEXT columns shown, try to remove trailing spaces; you need to use the TRIM() function
3. When you are doing count(column) it won't consider the "NULL" values; you might need to explore other alternatives like Count(*)

Q1 --- List all the directors who directed a 'Comedy' movie in a leap year. (You need to check that the genre is 'Comedy' and the year is a leap year.) Your query should return the director name, the movie name, and the year. To determine whether a year is a leap year, follow these steps:
STEP-1: If the year is evenly divisible by 4, go to step 2. Otherwise, go to step 5.
STEP-2: If the year is evenly divisible by 100, go to step 3. Otherwise, go to step 4.
STEP-3: If the year is evenly divisible by 400, go to step 4. Otherwise, go to step 5.
STEP-4: The year is a leap year (it has 366 days).
STEP-5: The year is not a leap year (it has 365 days).
Year 1900 is divisible by 4 and 100 but it is not divisible by 400, so it is not a leap year.
###Code
%%time
def grader_1(q1):
q1_results = pd.read_sql_query(q1,conn)
print(q1_results.head(10))
assert (q1_results.shape == (232,3))
query1 = """ *** Write your query for the question 1 *** """
grader_1(query1)
###Output
director name movie name mod_year
0 Amit Mitra Jagte Raho 1956
1 Chetan Anand Funtoosh 1956
2 Satyen Bose Jagriti 1956
3 Mohan Segal New Delhi 1956
4 S.U. Sunny Kohinoor 1960
5 Bimal Roy Parakh 1960
6 R.K. Nayyar Love in Simla 1960
7 K. Shankar Rajkumar 1964
8 Shakti Samanta Kashmir Ki Kali 1964
9 Ram Mukherjee Leader 1964
(232, 3)
Wall time: 205 ms
###Markdown
Q2 --- List the names of all the actors who played in the movie 'Anand' (1971)
###Code
%%time
def grader_2(q2):
q2_results = pd.read_sql_query(q2,conn)
print(q2_results.head(10))
assert (q2_results.shape == (17,1))
query2 = """ *** Write your query for the question 2 *** """
grader_2(query2)
###Output
Actor_Names
0 Amitabh Bachchan
1 Rajesh Khanna
2 Sumita Sanyal
3 Ramesh Deo
4 Seema Deo
5 Asit Kumar Sen
6 Dev Kishan
7 Atam Prakash
8 Lalita Kumari
9 Savita
Wall time: 43 ms
###Markdown
Q3 --- List all the actors who acted in a film before 1970 and in a film after 1990. (That is: in a film with year < 1970 and in a film with year > 1990.)
###Code
%%time
def grader_3a(query_less_1970, query_more_1990):
q3_a = pd.read_sql_query(query_less_1970,conn)
q3_b = pd.read_sql_query(query_more_1990,conn)
return (q3_a.shape == (4942,1)) and (q3_b.shape == (62572,1))
query_less_1970 =""" *** write the query to get all the id's of actors who acted before 1970 *** """
query_more_1990 =""" *** write the query to get all the id's of actors who acted after 1990 *** """
print(grader_3a(query_less_1970, query_more_1990))
# using the above two queries, you can find the answer to the given question
%%time
def grader_3(q3):
q3_results = pd.read_sql_query(q3,conn)
print(q3_results.head(10))
assert (q3_results.shape == (300,1))
query3 = """ *** Write your query for the question 3 *** """
grader_3(query3)
###Output
Actor_Name
0 Rishi Kapoor
1 Amitabh Bachchan
2 Asrani
3 Zohra Sehgal
4 Parikshat Sahni
5 Rakesh Sharma
6 Sanjay Dutt
7 Ric Young
8 Yusuf
9 Suhasini Mulay
Wall time: 227 ms
###Markdown
Q4 --- List all directors who directed 10 movies or more, in descending order of the number of movies they directed. Return the directors' names and the number of movies each of them directed.
###Code
%%time
def grader_4a(query_4a):
query_4a = pd.read_sql_query(query_4a,conn)
print(query_4a.head(10))
return (query_4a.shape == (1464,2))
query_4a =""" *** Write a query, which will return all the directors(id's) along with the number of movies they directed *** """
print(grader_4a(query_4a))
# using the above query, you can write the answer to the given question
%%time
def grader_4(q4):
q4_results = pd.read_sql_query(q4,conn)
print(q4_results.head(10))
assert (q4_results.shape == (58,2))
query4 = """ *** Write your query for the question 4 *** """
grader_4(query4)
###Output
Director_Name Movie_Count
0 David Dhawan 39
1 Mahesh Bhatt 35
2 Priyadarshan 30
3 Ram Gopal Varma 30
4 Vikram Bhatt 29
5 Hrishikesh Mukherjee 27
6 Yash Chopra 21
7 Basu Chatterjee 19
8 Shakti Samanta 19
9 Subhash Ghai 18
Wall time: 32 ms
###Markdown
Q5.a --- For each year, count the number of movies in that year that had only female actors.
###Code
%%time
# note that you don't need TRIM for person table
def grader_5aa(query_5aa):
query_5aa = pd.read_sql_query(query_5aa,conn)
print(query_5aa.head(10))
return (query_5aa.shape == (8846,3))
query_5aa =""" *** Write your query that will get the movie id and the number of people of each gender *** """
print(grader_5aa(query_5aa))
def grader_5ab(query_5ab):
query_5ab = pd.read_sql_query(query_5ab,conn)
print(query_5ab.head(10))
return (query_5ab.shape == (3469, 3))
query_5ab =""" *** Write your query that will get movies having at least one male actor; try to use the query that you have written above *** """
print(grader_5ab(query_5ab))
# using the above queries, you can write the answer to the given question
%%time
def grader_5a(q5a):
q5a_results = pd.read_sql_query(q5a,conn)
print(q5a_results.head(10))
assert (q5a_results.shape == (4,2))
query5a = """ *** Write your query for the question 5a *** """
grader_5a(query5a)
###Output
YEAR Female_Cast_Only_Movies
0 1939 1
1 1999 1
2 2000 1
3 2018 1
Wall time: 264 ms
###Markdown
Q5.b --- Now include a small change: report for each year the percentage of movies in that year with only female actors, and the total number of movies made that year. For example, one answer will be: 1990 31.81 13522 meaning that in 1990 there were 13,522 movies, and 31.81% had only female actors. You do not need to round your answer.
###Code
%%time
def grader_5b(q5b):
q5b_results = pd.read_sql_query(q5b,conn)
print(q5b_results.head(10))
assert (q5b_results.shape == (4,3))
query5b = """ *** Write your query for the question 5b *** """
grader_5b(query5b)
###Output
YEAR Percentage_Female_Only_Movie Total_Movies
0 1939 0.500000 2
1 1999 0.015152 66
2 2000 0.015625 64
3 2018 0.009615 104
Wall time: 324 ms
###Markdown
Q6 --- Find the film(s) with the largest cast. Return the movie title and the size of the cast. By "cast size" we mean the number of distinct actors that played in that movie: if an actor played multiple roles, or if it simply occurs multiple times in casts, we still count her/him only once.
###Code
%%time
def grader_6(q6):
q6_results = pd.read_sql_query(q6,conn)
print(q6_results.head(10))
assert (q6_results.shape == (3473, 2))
query6 = """ *** Write your query for the question 5b *** """
grader_6(query6)
###Output
title count
0 Ocean's Eight 238
1 Apaharan 233
2 Gold 215
3 My Name Is Khan 213
4 Captain America: Civil War 191
5 Geostorm 170
6 Striker 165
7 2012 154
8 Pixels 144
9 Yamla Pagla Deewana 2 140
Wall time: 232 ms
###Markdown
Q7 --- A decade is a sequence of 10 consecutive years. For example, say in your database you have movie information starting from 1931. the first decade is 1931, 1932, ..., 1940, the second decade is 1932, 1933, ..., 1941 and so on. Find the decade D with the largest number of films and the total number of films in D.
###Code
%%time
def grader_7a(q7a):
q7a_results = pd.read_sql_query(q7a,conn)
print(q7a_results.head(10))
assert (q7a_results.shape == (78, 2))
query7a = """ *** Write a query that computes number of movies in each year *** """
grader_7a(query7a)
# using the above query, you can write the answer to the given question
%%time
def grader_7b(q7b):
q7b_results = pd.read_sql_query(q7b,conn)
print(q7b_results.head(10))
assert (q7b_results.shape == (713, 4))
query7b = """
***
Write a query that will do joining of the above table(7a) with itself
such that you will join with only rows if the second tables year is <= current_year+9 and more than or equal current_year
***
"""
grader_7b(query7b)
# if you see the below results the first movie year is less than 2nd movie year and
# 2nd movie year is less or equal to the first movie year+9
# using the above query, you can write the answer to the given question
%%time
def grader_7(q7):
q7_results = pd.read_sql_query(q7,conn)
print(q7_results.head(10))
assert (q7_results.shape == (1, 2))
query7 = """ *** Write a query that will return the decade that has maximum number of movies ***"""
grader_7(query7)
# if you check the output we are printinng all the year in that decade, its fine you can print 2008 or 2008-2017
###Output
Decade_Movie_Count Decade
0 1203 2008-2009-2010-2011-2012-2013-2014-2015-2016-2017
Wall time: 21 ms
###Markdown
Q8 --- Find all the actors that made more movies with Yash Chopra than any other director.
###Code
%%time
def grader_8a(q8a):
q8a_results = pd.read_sql_query(q8a,conn)
print(q8a_results.head(10))
assert (q8a_results.shape == (73408, 3))
query8a = """ *** Write a query that will results in number of movies actor-director worked together ***"""
grader_8a(query8a)
# using the above query, you can write the answer to the given question
%%time
def grader_8(q8):
q8_results = pd.read_sql_query(q8,conn)
print(q8_results.head(10))
print(q8_results.shape)
assert (q8_results.shape == (245, 2))
query8 = """ *** Write a query that answers the 8th question ***"""
grader_8(query8)
###Output
Name count
0 Jagdish Raj 11
1 Manmohan Krishna 10
2 Iftekhar 9
3 Shashi Kapoor 7
4 Rakhee Gulzar 5
5 Waheeda Rehman 5
6 Ravikant 4
7 Achala Sachdev 4
8 Neetu Singh 4
9 Leela Chitnis 3
(245, 2)
Wall time: 864 ms
###Markdown
Q9 --- The Shahrukh number of an actor is the length of the shortest path between the actor and Shahrukh Khan in the "co-acting" graph. That is, Shahrukh Khan has Shahrukh number 0; all actors who acted in the same film as Shahrukh have Shahrukh number 1; all actors who acted in the same film as some actor with Shahrukh number 1 have Shahrukh number 2, etc. Return all actors whose Shahrukh number is 2.
###Code
%%time
def grader_9a(q9a):
q9a_results = pd.read_sql_query(q9a,conn)
print(q9a_results.head(10))
print(q9a_results.shape)
assert (q9a_results.shape == (2382, 1))
query9a = """ *** Write a query that answers the 9th question ***"""
grader_9a(query9a)
# using the above query, you can write the answer to the given question
# selecting actors who acted with srk (S1)
# selecting all movies where S1 actors acted, this forms S2 movies list
# selecting all actors who acted in S2 movies, this gives us S2 actors along with S1 actors
# removing S1 actors from the combined list of S1 & S2 actors, so that we get only S2 actors
%%time
def grader_9(q9):
q9_results = pd.read_sql_query(q9,conn)
print(q9_results.head(10))
print(q9_results.shape)
assert (q9_results.shape == (25698, 1))
query9 = """ *** Write a query that answers the 9th question ***"""
grader_9(query9)
###Output
Actor_Name
0 Freida Pinto
1 Rohan Chand
2 Damian Young
3 Waris Ahluwalia
4 Caroline Christl Long
5 Rajeev Pahuja
6 Michelle Santiago
7 Alicia Vikander
8 Dominic West
9 Walton Goggins
(25698, 1)
Wall time: 591 ms
|
Covid19_Testing_Importance.ipynb | ###Markdown
Covid-19 Testing Importance

Introduction

I believe that testing is one of the most crucial parts of dealing with an epidemic virus. Testing helps us identify and isolate positive cases. The more tests you perform, the faster you isolate the case, preventing them from coming into contact with others, **slowing the rate of transmission**. This will be performed by "merging" the information of two data sources:
* [Our World In Data Covid-19 Tests](https://ourworldindata.org/coronavirus-testing-source-data) for the number of tests of each country
* [John Hopkins Datasets](https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series) for the cases, recovered and deaths of each country

**Date this notebook was written: 23/3/2020**

**Disclaimer**: *In no way do I want to point my finger at the governments and people of countries. It is not possible to know the reasons and circumstances that led to a lack of testing. My only target is to see if the data suggests that testing has a major role in this specific epidemic.*
###Code
import requests
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import spearmanr
from bs4 import BeautifulSoup
###Output
_____no_output_____
###Markdown
Data sources

Link to the created dataset: https://github.com/MikeXydas/Weekend-EDAs/blob/master/datasets/covid_testing_importance.csv

The first thing we want to find is **how many tests** have been performed in each country. The data will be extracted from this page: https://ourworldindata.org/coronavirus-testing-source-data . You may observe that the dates of the last report are not the same for each country. This will be taken into account.
###Code
response = requests.get('https://ourworldindata.org/coronavirus-testing-source-data')
soup = BeautifulSoup(response.content)
table_soup = soup.find("div", {"class": "tableContainer"}).findAll("tr")[1:] # skip the headers
# RegEx to extract the country name, number of test, date
reg = re.compile("<tr><td>(.+)<\/td><td>([\d,]+)<\/td><td>([\w\s]+)<\/td><td>.*<\/td><td>.*<\/td><\/tr>")
testing_data = []
for row in table_soup:
reg_res = reg.match(str(row))
if reg_res is not None:
testing_data.append(reg_res.groups())
else:
# Print the rows that failed to be parsed
print(row)
# Create the dataframe
testing_df = pd.DataFrame(testing_data, columns=["Country", "Tests", "Date"])
testing_df['Tests'] = testing_df['Tests'].str.replace(',','').astype(int) # Transform the x,xxx string to integers
testing_df['Date'] = pd.to_datetime(testing_df['Date']) # Cast Date from string to date type
# Set country name as index of dataframe
testing_df = testing_df.set_index('Country', drop=False)
# For Canada and Australia we have info about specific regions
# Since on this notebook we will examine country response we merge them
# Australia, dropping all the province info and keeping the aggregated "Australia"
testing_df = testing_df[~testing_df.Country.str.contains("Australia –")]
# Canada
testing_df = testing_df[~testing_df.Country.str.contains("Canada –")]
# Usa has two trackers I will keep the most recent one
testing_df = testing_df.drop('United States – CDC samples tested')
testing_df = testing_df.drop("Hong Kong")
# We rename some countries so as to have the expected country name with the John Hopkins dataset
testing_df = testing_df.rename({
"China – Guangdong": "China",
"United States": "US",
"Czech Republic": "Czechia",
"South Korea": "Korea, South",
"Taiwan": "Taiwan*",
"Faeroe Islands":"Faroe Islands"
})
# Drop Palestine since it is not included on the John Hopkins dataset
testing_df = testing_df.drop("Palestine")
# Drop Faroe Islands since they are merged with Denmark
testing_df = testing_df.drop("Faroe Islands")
testing_df = testing_df.drop('Country', axis=1)
display(testing_df)
###Output
<tr><td>Canada</td><td></td><td></td><td></td><td></td><td>An aggregate figure for Canada is not provided given that the extent to which double-counting between the provincial labs and the national lab (NML) is unclear. See province level data and that for NML above. No figures have yet been found for Nunavut, Manitoba, Yukon and Newfoundland and Labrador provinces (collectively ~5% of population).</td></tr>
<tr><td>Kuwait</td><td></td><td>17 Mar 2020</td><td><a href="https://drive.google.com/file/d/1pVBq-c4HLeUis_BS58xT_jJTJZni2JfR/view?usp=sharing" rel="noreferrer noopener" target="_blank">Communication from International Press Office at the Ministry of Information in the State of Kuwait, 17 March 2020.</a></td><td>17 Mar 2020</td><td>In an earlier version of this dataset we reported an estimate of 120,000 tests based on an official letter sent to us by the Ministry of Information. After this, they sent a second email correcting their estimate to 27,000, and then a further correction to 20,000. Since no figure is substantiated in a public statement, we have decided to not publish the numbers. We will revise this once the numbers are made public.</td></tr>
###Markdown
Next we must find info of confirmed cases, recovered cases, deaths for each country above. The dataset used will be the [John Hopkins one](https://github.com/CSSEGISandData/COVID19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv).
###Code
def transform_date_to_str(date_str):
    """Transforms a 'YYYY-MM-DD ...' date string to the John Hopkins column format: m/d/yy (no leading zeros)"""
    month = str(int(date_str[5:7]))
    day = str(int(date_str[8:10]))
    return month + "/" + day + "/20"
def read_john_hopkins_dataset(url, column_name):
john_hopkins_df = pd.read_csv(url, index_col=['Country/Region'])
john_hopkins_df = john_hopkins_df.drop('Lat', axis=1)
john_hopkins_df = john_hopkins_df.drop('Long', axis=1)
# We must sum the cases on countries that are displayed by region eg US and China
john_hopkins_df_grouped = john_hopkins_df.groupby(['Country/Region']).sum()
testing_df[column_name] = np.nan
for index, row in testing_df.iterrows():
try:
testing_df.at[index, column_name] = john_hopkins_df_grouped.loc[index][transform_date_to_str(str(row['Date']))]
except KeyError:
# We are recovering from some missing keys
print(index)
# Add confirmed cases column
read_john_hopkins_dataset('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv',
"Confirmed")
# Add recovered cases column
read_john_hopkins_dataset('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Recovered.csv',
"Recovered")
# Add deaths column
read_john_hopkins_dataset('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv',
"Deaths")
display(testing_df)
###Output
_____no_output_____
###Markdown
Having created the dataset, we will save it so that we can later import it instead of creating it from scratch and having to treat the special cases of missing values and future incompatibilities between the two datasets we combine.

Important fields that should be added to the dataset to draw more useful conclusions (a rough sketch of how they could be added follows the next cell):
* Population
* Tests per million
###Code
testing_df.to_csv(path_or_buf='datasets/covid_testing_importance.csv')
###Output
_____no_output_____
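###Markdown
Sketch of the missing fields mentioned above (an added illustration only): the population figures below are rough, hard-coded approximations for a few countries, so a proper population table (e.g. UN/World Bank) should be merged in for real analysis.
###Code
# Rough sketch: add approximate populations and compute tests per million.
# The population numbers are approximate and for illustration only.
approx_population = pd.Series({
    'Italy': 60_360_000,
    'Korea, South': 51_640_000,
    'Australia': 25_500_000,
    'United Kingdom': 66_650_000,
}, name='Population')

testing_df = testing_df.join(approx_population)
testing_df['TestsPerMillion'] = testing_df['Tests'] / (testing_df['Population'] / 1e6)
testing_df[['Tests', 'Population', 'TestsPerMillion']].dropna()
###Output
_____no_output_____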
###Markdown
Evaluation and Conclusions

Having our dataset created, we will try to infer insights by seeing how the number of tests correlates with the number of cases and the ability of the country to successfully deal with the epidemic.

Number Of Cases

Firstly, I will calculate the correlation between the number of tests and the number of cases.
###Code
# Plot the cases-axis, tests axis
fig, ax = plt.subplots()
fig.set_size_inches(5, 5)
ax.set_yscale('log')
ax.set_xscale('log')
ax.scatter(testing_df.Tests, testing_df.Confirmed)
# We add 0.1 to avoid zero values on the log scale; this offset is not used when calculating the correlation value
ax.set_xlim([min(testing_df.Tests)+ 0.1,max(testing_df.Tests) + 0.1])
ax.set_ylim([min(testing_df.Confirmed) + 0.1,max(testing_df.Confirmed) + 0.1])
ax.grid(True)
for i, row in enumerate(testing_df.iterrows()):
# Add the annotation of the country name if the country has many tests
# to avoid text cluttering
if testing_df.Tests[i] > 2e4:
ax.annotate(row[0], (testing_df.Tests[i], testing_df.Confirmed[i]))
fig.suptitle('Tests vs Confirmed Cases', fontsize=20)
plt.xlabel('Tests (Log)', fontsize=18)
plt.ylabel('Cases (Log)', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Although the points do not fall on a perfectly straight line, the relationship looks roughly linear on the log-log scale. A straight line on a **log-log** plot suggests a power law of the form $Cases = k \cdot Tests^n$. When the slope on a log-log plot is between 0 and 1, it signifies that the nonlinear effect of the dependent variable lessens as its value increases [(ref)](https://statisticsbyjim.com/regression/log-log-plots/). In our case this means that **as the number of tests increases, the rate of discovered cases slows down**.
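To put a rough number on that slope, the short sketch below (an addition) fits a least-squares line in log-log space; it is only a crude estimate of the exponent $n$, since it treats Tests as error-free and drops zero counts.
###Code
# Rough estimate of the exponent n in Cases ~ k * Tests^n via a linear fit in log-log space.
fit_df = testing_df[['Tests', 'Confirmed']].dropna()
fit_df = fit_df[(fit_df.Tests > 0) & (fit_df.Confirmed > 0)]
slope, intercept = np.polyfit(np.log10(fit_df.Tests), np.log10(fit_df.Confirmed), 1)
print('Estimated exponent n: %.3f' % slope)
###Output
_____no_output_____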
###Code
# Calculate the correlation value
corr_cases, _ = spearmanr(testing_df.Tests, testing_df.Confirmed)
print('Spearmans correlation: %.3f' % corr_cases)
###Output
Spearmans correlation: 0.718
###Markdown
As expected, a Spearman's correlation of **0.718** is large enough to suggest a strong positive relationship between the number of tests and the number of confirmed cases.

Death Ratio

We let a new variable $deathRatio = \frac{deaths}{cases}$. This variable could be read as the effectiveness of the health care system of each country, where the smaller the value the more effective.
###Code
testing_df['DeathRatio'] = testing_df.Deaths / testing_df.Confirmed
testing_df_death_ratio = testing_df[['Tests', 'DeathRatio']].dropna() # Remove null values
# Remove countries with no deaths so as to not clutter the plot
testing_df_death_ratio = testing_df_death_ratio[testing_df_death_ratio['DeathRatio'] != 0]
# Plot the cases-axis, tests axis
fig, ax = plt.subplots()
fig.set_size_inches(10, 10)
ax.scatter(testing_df_death_ratio.Tests, testing_df_death_ratio.DeathRatio)
ax.grid(True)
for i, row in enumerate(testing_df_death_ratio.iterrows()):
ax.annotate(row[0], (testing_df_death_ratio.Tests[i], testing_df_death_ratio.DeathRatio[i]))
fig.suptitle('Tests vs Death Ratio', fontsize=20)
plt.xlabel('Tests', fontsize=18)
plt.ylabel('Death Ratio', fontsize=16)
plt.show()
###Output
_____no_output_____ |
phase2/real project/avoidable death and service barrier/phase2_implementation.ipynb | ###Markdown
Outlier Analysis
###Code
Avoidable_Death_Total.plot(kind = 'box')
plt.title("Outlier of Total Avoidable death")
plt.ylabel("Number of Avoidable Death", fontsize = 14)
Avoidable_Death.sort_values(['Avoidable_Death_Total'], ascending = False)
###Output
_____no_output_____
###Markdown
Avoidable Death Total
###Code
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
df1 = pd.DataFrame({'Avoidable_Death_Total': Avoidable_Death_Total, \
'Health_Risk_Factor_Fruit_Adequate_Intake': Health_Risk_Factor_Fruit_Adequate_Intake})
plt.scatter(df1.iloc[:, 0], df1.iloc[:,1], color = 'r')
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
plt.xlabel('Total avoidable death \nper 100,000 population', fontsize=14)
plt.ylabel('Number of People \n Take Adequate Fruit\n every day per 100 population', fontsize=14)
plt.title("Avoidable_Death_Total VS Health_Risk_Factor_Fruit_Adequate_Intake", color = 'w')
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
df1 = pd.DataFrame({'Avoidable_Death_Total': Avoidable_Death_Total, \
'Health_Risk_Factor_Obese': Health_Risk_Factor_Obese})
plt.scatter(df1.iloc[:, 0], df1.iloc[:,1], color = 'r')
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
plt.xlabel('Total avoidable death \n per 100,000 population', fontsize=14)
plt.ylabel('Number of Obese people\n per 100 population', fontsize=14)
plt.title("Avoidable_Death_Total VS Health_Risk_Factor_Obese", color = 'w')
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
df1 = pd.DataFrame({'Avoidable_Death_Total': Avoidable_Death_Total, \
'Health_Risk_Factor_RiskWaistMearsurement': Health_Risk_Factor_RiskWaistMearsurement})
plt.scatter(df1.iloc[:, 0], df1.iloc[:,1], color = 'r')
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
plt.xlabel('Total avoidable death \n per 100,000 population', fontsize=14)
plt.ylabel('Number of people of \n risky waist measurement\n per 100 population', fontsize=14)
plt.title("Avoidable_Death_Total VS Health_Risk_Factor_RiskWaistMearsurement", color = 'w')
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
df1 = pd.DataFrame({'Avoidable_Death_Total': Avoidable_Death_Total, \
'Health_Risk_Factor_Low_Exercise': Health_Risk_Factor_Low_Exercise})
plt.scatter(df1.iloc[:, 0], df1.iloc[:,1], color = 'r')
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
plt.title("Avoidable_Death_Total VS Health_Risk_Factor_Low_Exercise", color = 'w')
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
df1 = pd.DataFrame({'Avoidable_Death_Total': Avoidable_Death_Total, \
'Health_Risk_Factor_Psychological_Distress': Health_Risk_Factor_Psychological_Distress})
plt.scatter(df1.iloc[:, 0], df1.iloc[:,1], color = 'r')
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
plt.title("Avoidable_Death_Total VS Health_Risk_Factor_Psychological_Distress", color = 'w')
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
df1 = pd.DataFrame({'Avoidable_Death_Total': Avoidable_Death_Total, \
'Health_Risk_Factor_Smoker': Health_Risk_Factor_Smoker})
plt.scatter(df1.iloc[:, 0], df1.iloc[:,1], color = 'r')
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
plt.xlabel('Total avoidable death \n per 100,000 population', fontsize=14)
plt.ylabel('Number of \n excessive smoker\n per 100 population', fontsize=14)
plt.title("Avoidable_Death_Total VS Health_Risk_Factor_Smoker", color = 'w')
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
###Output
Pearson r is 0.7081045347886481
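###Markdown
The scatter + regression-line + Pearson-r pattern above is repeated for every risk factor; a small helper like the sketch below (an addition, assuming the pandas/matplotlib/scipy imports and the `abline` function already used in this notebook) would remove the copy-paste.
###Code
# Optional helper (sketch): wraps the repeated scatter, fitted line and Pearson r report.
def plot_with_fit(x, y, xlabel, ylabel, title):
    df = pd.DataFrame({'x': x, 'y': y}).dropna()
    plt.scatter(df.x, df.y, color='r')
    slope, intercept = linregress(df.x, df.y)[:2]
    abline(slope, intercept)
    plt.xlabel(xlabel, fontsize=14)
    plt.ylabel(ylabel, fontsize=14)
    plt.title(title, color='w')
    print("Pearson r is", df.x.corr(df.y))

# Example usage:
# plot_with_fit(Avoidable_Death_Total, Health_Risk_Factor_Smoker,
#               'Total avoidable death \n per 100,000 population',
#               'Number of \n excessive smoker\n per 100 population',
#               'Avoidable_Death_Total VS Health_Risk_Factor_Smoker')
###Output
_____no_output_____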
###Markdown
Avoidable Death Cancer
###Code
from sklearn.metrics import mutual_info_score
from sklearn.metrics import normalized_mutual_info_score
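# Caveat (added note): mutual_info_score expects discrete labels, so feeding it continuous
# rates treats every distinct value as its own category; binning the values first
# (e.g. with pd.cut) or using sklearn's mutual_info_regression gives a more meaningful score.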
result = mutual_info_score(df1.iloc[:, 0], df1.iloc[:, 1])
result1 = normalized_mutual_info_score(df1.iloc[:, 0], df1.iloc[:, 1])
print("Mutual Infomation is", result)
print("Normalized Mutual Information is", result1)
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Cancer, Health_Risk_Factor_Fruit_Adequate_Intake, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Cancer': Avoidable_Death_Cancer,\
'Health_Risk_Factor_Fruit_Adequate_Intake':Health_Risk_Factor_Fruit_Adequate_Intake})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Cancer, Health_Risk_Factor_Psychological_Distress, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Cancer': Avoidable_Death_Cancer,\
'Health_Risk_Factor_Psychological_Distress':Health_Risk_Factor_Psychological_Distress})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Cancer, Health_Risk_Factor_Smoker, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Cancer': Avoidable_Death_Cancer,\
'Health_Risk_Factor_Smoker':Health_Risk_Factor_Smoker})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
###Output
Pearson r is 0.505216015934021
###Markdown
Avoidable_Death_Diab
###Code
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Diab, Health_Risk_Factor_Fruit_Adequate_Intake, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Diab': Avoidable_Death_Diab,\
'Health_Risk_Factor_Fruit_Adequate_Intake':Health_Risk_Factor_Fruit_Adequate_Intake})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Diab, Health_Risk_Factor_Low_Exercise, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Diab': Avoidable_Death_Diab,\
'Health_Risk_Factor_Low_Exercise': Health_Risk_Factor_Low_Exercise})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Diab, Health_Risk_Factor_Psychological_Distress, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Diab': Avoidable_Death_Diab,\
'Health_Risk_Factor_Psychological_Distress': Health_Risk_Factor_Psychological_Distress})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
from scipy.stats import linregress
def abline(slope, intercept):
"""Plot a line from slope and intercept"""
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = intercept + slope * x_vals
plt.plot(x_vals, y_vals, '-', color='black')
plt.scatter(Avoidable_Death_Diab, Health_Risk_Factor_Smoker, color = 'r')
df1 = pd.DataFrame({'Avoidable_Death_Diab': Avoidable_Death_Diab,\
'Health_Risk_Factor_Smoker': Health_Risk_Factor_Smoker})
df1 = df1.dropna()
Info = linregress(df1.iloc[:, 0], df1.iloc[:,1])
abline(Info[0], Info[1])
print("Pearson r is ",df1.iloc[:, 0].corr(df1.iloc[:, 1]))
###Output
Pearson r is 0.6066783273140159
|
Facial_Similarity.ipynb | ###Markdown
Contrastive Loss
![title](https://cdn-images-1.medium.com/max/1000/1*tDo6545MUvW9t-A8Sd-hHw.png)
Dw : Euclidean distance between the two embeddings
Gw : output of the twin (sister) networks
m : margin value, which is greater than 0. Having a margin means that dissimilar pairs that are already farther apart than the margin do not contribute to the loss.
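Putting those pieces together (written to match the implementation below, where the label $Y=1$ marks a dissimilar pair; the original paper also multiplies each term by $\frac{1}{2}$): $L = (1-Y)\,D_w^2 + Y\,\big(\max(0,\, m - D_w)\big)^2$, averaged over the batch.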
###Code
#Defining the loss
class Contrastive_loss(torch.nn.Module):
'''Contrastive loss function.
Based on: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf'''
def __init__(self, margin=2.0):
super(Contrastive_loss, self).__init__()
self.margin = margin
def forward(self, output1, output2, label):
euclidean_distance = F.pairwise_distance(output1, output2, keepdim = True)
loss_contrastive = torch.mean((1-label) * torch.pow(euclidean_distance, 2) +
(label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))
return loss_contrastive
#Defining the batch_size and epochs
batch_size = 64
num_epochs = 200
#Defning the data loader
train_loader = DataLoader(dataset , batch_size = batch_size , shuffle = True)
#Defining the loss and optimizer
net = Siamese_net()
if use_cuda:
net = net.cuda()
print(net)
criterion = Contrastive_loss()
optimizer = optim.Adam(net.parameters() , lr = 0.0005)
iteration_num = 0
counter = []
loss_history = []
#Training time
for epoch in range(num_epochs):
for i , data in enumerate(train_loader , 0):
img0 , img1 , label = data
if use_cuda:
img0 , img1 , label = img0.cuda() , img1.cuda() , label.cuda()
optimizer.zero_grad()
op1 , op2 = net(img0 , img1)
loss = criterion(op1 , op2 , label)
loss.backward()
optimizer.step()
if i%10 == 0:
print("Epoch : {} || Contrastive Loss : {}".format(epoch , loss.item()))
iteration_num += 10
counter.append(iteration_num)
loss_history.append(loss.item())
plot_loss(counter , loss_history)
#Time for testing
test_data_folder = dset.ImageFolder(root = testing_dir)
transform = transforms.Compose([transforms.Resize((100,100)) , transforms.ToTensor()])
test_dataset = CustomDataset(test_data_folder , transform = transform, should_invert = False)
test_loader = DataLoader(test_dataset , batch_size = 1 , shuffle = True)
dataiter = iter(test_loader)
x0 , _ ,_ = next(dataiter)
for i in range(15):
_, x1 , label = next(dataiter)
cc = torch.cat((x0,x1) , 0)
op1 , op2 = net(Variable(x0).cuda() , Variable(x1).cuda())
eu_dist = F.pairwise_distance(op1 , op2)
image_show(vutils.make_grid(cc) , 'Dissimilarity Score : {:2f}'.format(eu_dist.item()))
###Output
_____no_output_____ |
notebooks/part-02-what-are-GANs/00-what-nn.ipynb | ###Markdown
Let's build a simple numpy-based NN to classify MNIST

Load the dataset
###Code
import h5py
import numpy as np
# load MNIST data
MNIST_data = h5py.File("../../data/MNIST/MNISTdata.hdf5", 'r')
x_train = np.float32(MNIST_data['x_train'][:])
y_train = np.int32(np.array(MNIST_data['y_train'][:, 0])).reshape(-1, 1)
x_test = np.float32(MNIST_data['x_test'][:])
y_test = np.int32(np.array(MNIST_data['y_test'][:, 0])).reshape(-1, 1)
MNIST_data.close()
# stack together for next step
X = np.vstack((x_train, x_test))
y = np.vstack((y_train, y_test))
# one-hot encoding
digits = 10
examples = y.shape[0]
y = y.reshape(1, examples)
Y_new = np.eye(digits)[y.astype('int32')]
Y_new = Y_new.T.reshape(digits, examples)
# number of training set
m = 60000
m_test = X.shape[0] - m
X_train, X_test = X[:m].T, X[m:].T
Y_train, Y_test = Y_new[:, :m], Y_new[:, m:]
# shuffle training set
shuffle_index = np.random.permutation(m)
X_train, Y_train = X_train[:, shuffle_index], Y_train[:, shuffle_index]
X_train.shape, Y_train.shape, X_test.shape, Y_test.shape
assert X_train.shape[1] == Y_train.shape[1]
assert X_test.shape[1] == Y_test.shape[1]
###Output
_____no_output_____
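###Markdown
A quick sanity check (an added cell): display one training image together with the label encoded in its one-hot vector.
###Code
# Visual sanity check of the preprocessed data (added cell).
import matplotlib.pyplot as plt
idx = 0
plt.imshow(X_train[:, idx].reshape(28, 28), cmap='gray')
plt.title("label: %d" % np.argmax(Y_train[:, idx]))
plt.show()
###Output
_____no_output_____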
###Markdown
Weight Initialization

The output of a randomly initialized neuron has a variance that grows with the number of inputs. It turns out that we can normalize the variance of each neuron's output to $1$ by scaling its weight vector by the square root of its number of inputs. That is, the recommended heuristic is to initialize each neuron's weight vector as $w = \frac{\mathcal{N}(0,\,1)}{\sqrt{n}}$, where $n$ is the number of its inputs (i.e., draw from a standard normal and scale by $1/\sqrt{n}$).

This ensures that all neurons in the network initially have approximately the same output distribution and empirically improves the rate of convergence.
###Code
classes = 10
input_n = 784 # (28 x 28)
hidden_n = 64
params = {"W1": np.random.randn(hidden_n, input_n) * np.sqrt(1. / input_n),
"b1": np.zeros((hidden_n, 1)) * np.sqrt(1. / input_n),
"W2": np.random.randn(classes, hidden_n) * np.sqrt(1. / hidden_n),
"b2": np.zeros((classes, 1)) * np.sqrt(1. / hidden_n)}
###Output
_____no_output_____
###Markdown
Activation Functions

The sigmoid curve plotted below has a finite limit of:
'0' as $x$ approaches $-\infty$
'1' as $x$ approaches $+\infty$
###Code
def sigmoid(z):
s = 1. / (1. + np.exp(-z))
return s
def softmax(z):
    # subtract the column-wise max before exponentiating for numerical stability (same result, avoids overflow)
    z_shifted = z - np.max(z, axis=0, keepdims=True)
    return np.exp(z_shifted) / np.sum(np.exp(z_shifted), axis=0, keepdims=True)
%matplotlib inline
import matplotlib.pyplot as plt
x = np.arange(-20., 20., 0.2)
y = sigmoid(x)
plt.plot(x, y)
###Output
_____no_output_____
###Markdown
Loss Function

What we use in this example is the cross-entropy loss. After averaging over a training set of $m$ examples:

$L = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k} y^{(i)}_k \log \hat{y}^{(i)}_k$
###Code
def loss(Y, Y_hat):
L_sum = np.sum(np.multiply(Y, np.log(Y_hat)))
m = Y.shape[1]
L = -(1./m) * L_sum
return L
###Output
_____no_output_____
###Markdown
Training step, Forward pass and back propagation

Forward pass

The forward pass on a single example $x$ executes the following computation on each layer of the network:

$z^{l} = W^{l} a^{l-1} + b^{l}, \qquad a^{l} = \sigma(z^{l})$

with $a^{0} = x$, the sigmoid as $\sigma$ for the hidden layer and the softmax for the output layer (matching `feed_forward` below).
###Code
def feed_forward(X, params):
"""
feed forward network: 2layer neural net
inputs:
params: dictionary contains all the weights and biases
return:
cache: a dictionary contains all the fully connected units and activations
"""
cache = {}
# Z1 = W1.dot(x) + b1
cache["Z1"] = np.matmul(params["W1"], X) + params["b1"]
# A1 = sigmoid(Z1)
cache["A1"] = sigmoid(cache["Z1"])
# Z2 = W2.dot(A1) + b2
cache["Z2"] = np.matmul(params["W2"], cache["A1"]) + params["b2"]
# A2 = softmax(Z2)
cache["A2"] = softmax(cache["Z2"])
return cache
###Output
_____no_output_____
###Markdown
Back PropagationBack propagation is actually a fancy name of chain rules. Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function.An equation for the error in the output layer, $\delta^L$:\begin{eqnarray} \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j).\tag{BP1}\end{eqnarray}An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$:\begin{eqnarray} \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l),\tag{BP2}\end{eqnarray}An equation for the rate of change of the cost with respect to any bias in the network:\begin{eqnarray} \frac{\partial C}{\partial b^l_j} = \delta^l_j.\tag{BP3}\end{eqnarray}An equation for the rate of change of the cost with respect to any weight in the network:\begin{eqnarray} \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j.\tag{BP4}\end{eqnarray} How backpropagation works: Check out http://neuralnetworksanddeeplearning.com/chap2.html Great introduction with intuitive interpretation!
###Code
def back_propagate(X, Y, params, cache, m_batch):
"""
back propagation
inputs:
params: dictionary contains all the weights and biases
cache: dictionary contains all the fully connected units and activations
return:
grads: dictionary contains the gradients of corresponding weights and biases
"""
# error at last layer
dZ2 = cache["A2"] - Y
# gradients at last layer (Py2 need 1. to transform to float)
dW2 = (1. / m_batch) * np.matmul(dZ2, cache["A1"].T)
db2 = (1. / m_batch) * np.sum(dZ2, axis=1, keepdims=True)
# back propagate through first layer
dA1 = np.matmul(params["W2"].T, dZ2)
dZ1 = dA1 * sigmoid(cache["Z1"]) * (1 - sigmoid(cache["Z1"]))
# gradients at first layer (Py2 need 1. to transform to float)
dW1 = (1. / m_batch) * np.matmul(dZ1, X.T)
db1 = (1. / m_batch) * np.sum(dZ1, axis=1, keepdims=True)
grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2}
return grads
###Output
_____no_output_____
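###Markdown
Because the backprop equations are easy to get subtly wrong, a numerical gradient check is a useful sanity test. The sketch below is illustrative: the small random batch is made up, and it reuses the `feed_forward`, `back_propagate`, `loss` and `params` objects defined above.
###Code
# hedged numerical gradient check on a single entry of W1 (illustrative only)
eps = 1e-5
X_chk = np.random.randn(input_n, 4)                           # 4 fake inputs
Y_chk = np.eye(classes)[:, np.random.randint(0, classes, 4)]  # fake one-hot targets
cache_chk = feed_forward(X_chk, params)
grads_chk = back_propagate(X_chk, Y_chk, params, cache_chk, X_chk.shape[1])
i, j = 0, 0
w_orig = params["W1"][i, j]
params["W1"][i, j] = w_orig + eps
loss_plus = loss(Y_chk, feed_forward(X_chk, params)["A2"])
params["W1"][i, j] = w_orig - eps
loss_minus = loss(Y_chk, feed_forward(X_chk, params)["A2"])
params["W1"][i, j] = w_orig                                   # restore the weight
print("numeric :", (loss_plus - loss_minus) / (2 * eps))
print("analytic:", grads_chk["dW1"][i, j])                    # these two should agree closely
###Output
_____no_output_____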
###Markdown
Training and Hyperparameter setup The training process can be simplified as a loop: forward pass -> compute loss -> back propagation -> update weights and biases -> repeat.
###Code
X_train.shape
batch_size = 64
epochs = 100
lr = 0.01
batches = int(X_train.shape[1] / batch_size)  # derive from batch_size rather than hard-coding 64
history_loss_train = []
for i in range(epochs):
# shuffle training set
permutation = np.random.permutation(X_train.shape[1])
x_train_shuffled = X_train[:, permutation]
y_train_shuffled = Y_train[:, permutation]
for j in range(batches):
# get mini-batch
begin = j * batch_size
end = min(begin + batch_size, X_train.shape[1] - 1)
X = x_train_shuffled[:, begin:end]
Y = y_train_shuffled[:, begin:end]
m_batch = end - begin
# forward and backward
cache = feed_forward(X, params)
grads = back_propagate(X, Y, params, cache, m_batch)
# gradient descent
params["W1"] = params["W1"] - lr * grads["dW1"] # dW1
params["b1"] = params["b1"] - lr * grads["db1"] #db1
params["W2"] = params["W2"] - lr * grads["dW2"] #dW2
params["b2"] = params["b2"] - lr * grads["db2"] #db2
# forward pass on training set
cache = feed_forward(X_train, params)
train_loss = loss(Y_train, cache["A2"])
history_loss_train.append(train_loss)
# forward pass on test set
cache = feed_forward(X_test, params)
test_loss = loss(Y_test, cache["A2"])
print("Epoch {}: training loss = {}, test loss = {}".format(
i + 1, train_loss, test_loss))
###Output
Epoch 1: training loss = 1.623204380193469, test loss = 1.6113645849504843
Epoch 2: training loss = 1.0959083488410803, test loss = 1.0781490074095392
Epoch 3: training loss = 0.8266707391291508, test loss = 0.8079428792725178
Epoch 4: training loss = 0.6824468154255389, test loss = 0.6638398415483551
Epoch 5: training loss = 0.5946645526027842, test loss = 0.5762846175601509
Epoch 6: training loss = 0.5360955586482143, test loss = 0.5180131832685556
Epoch 7: training loss = 0.49410086367875, test loss = 0.47631394996472376
Epoch 8: training loss = 0.46238275985059424, test loss = 0.44460446112276836
Epoch 9: training loss = 0.4381516311528761, test loss = 0.42087858725859145
Epoch 10: training loss = 0.41852581519452364, test loss = 0.4016330446851611
Epoch 11: training loss = 0.4023785909182574, test loss = 0.3858861640289548
Epoch 12: training loss = 0.3891534397888373, test loss = 0.37307861117976704
Epoch 13: training loss = 0.3775934111408165, test loss = 0.36232356529689125
Epoch 14: training loss = 0.36772712286599335, test loss = 0.3526641770576827
Epoch 15: training loss = 0.3591008407370659, test loss = 0.3449188951873443
Epoch 16: training loss = 0.351566992555923, test loss = 0.33733153755875794
Epoch 17: training loss = 0.34459859536126075, test loss = 0.33115725474301466
Epoch 18: training loss = 0.3385478963097311, test loss = 0.32545507129499374
Epoch 19: training loss = 0.33272230970119615, test loss = 0.32004451200808715
Epoch 20: training loss = 0.3274815878703693, test loss = 0.3150559292206987
Epoch 21: training loss = 0.3228704886551837, test loss = 0.31066309478658044
Epoch 22: training loss = 0.3182691994788282, test loss = 0.30661651499180476
Epoch 23: training loss = 0.3139994434091194, test loss = 0.3025475112905915
Epoch 24: training loss = 0.31025789488523114, test loss = 0.299496782436225
Epoch 25: training loss = 0.3064060623587297, test loss = 0.29574636896859485
Epoch 26: training loss = 0.3028955771171927, test loss = 0.29281432854858086
Epoch 27: training loss = 0.2995746514602135, test loss = 0.28951437704322547
Epoch 28: training loss = 0.29643285131422675, test loss = 0.2869070049036613
Epoch 29: training loss = 0.29349748376851126, test loss = 0.28426889977130776
Epoch 30: training loss = 0.2905199089596962, test loss = 0.2814574427037577
Epoch 31: training loss = 0.28766773784147237, test loss = 0.278578039998825
Epoch 32: training loss = 0.28486736265576856, test loss = 0.27647176611389923
Epoch 33: training loss = 0.2822185920834213, test loss = 0.2742530447527963
Epoch 34: training loss = 0.27965136640114424, test loss = 0.27202310283795256
Epoch 35: training loss = 0.2772327501225555, test loss = 0.26958672836673997
Epoch 36: training loss = 0.27474085799177717, test loss = 0.2674394546615231
Epoch 37: training loss = 0.2723886695137012, test loss = 0.26528132949066036
Epoch 38: training loss = 0.2701085160307339, test loss = 0.26313920144440905
Epoch 39: training loss = 0.267925097326937, test loss = 0.2616066430245368
Epoch 40: training loss = 0.2656765892006827, test loss = 0.25926969263254057
Epoch 41: training loss = 0.26346215007999446, test loss = 0.257311232656733
Epoch 42: training loss = 0.26150167725624096, test loss = 0.2553084485247569
Epoch 43: training loss = 0.25931234499426487, test loss = 0.25377159747845296
Epoch 44: training loss = 0.25727299371689033, test loss = 0.2519615799108455
Epoch 45: training loss = 0.2552794591959232, test loss = 0.25041149594926354
Epoch 46: training loss = 0.2533532004249513, test loss = 0.24841083346130738
Epoch 47: training loss = 0.2515231859821616, test loss = 0.24693088494183846
Epoch 48: training loss = 0.24952348503363142, test loss = 0.24519265280261154
Epoch 49: training loss = 0.24767712757542956, test loss = 0.2432790698598587
Epoch 50: training loss = 0.24570913732533467, test loss = 0.24181748837941774
Epoch 51: training loss = 0.2438496431102163, test loss = 0.2399088584100252
Epoch 52: training loss = 0.24205667640201256, test loss = 0.23824703754944865
Epoch 53: training loss = 0.2402420102423933, test loss = 0.23674048264346562
Epoch 54: training loss = 0.23853895501323605, test loss = 0.23530448653516883
Epoch 55: training loss = 0.2368688234350297, test loss = 0.23358046482125658
Epoch 56: training loss = 0.23509428532912008, test loss = 0.2321120967103418
Epoch 57: training loss = 0.23371680707461293, test loss = 0.2310780066818638
Epoch 58: training loss = 0.2317293502893233, test loss = 0.22894884860576614
Epoch 59: training loss = 0.23016501212226528, test loss = 0.22754922899718577
Epoch 60: training loss = 0.22853734487427768, test loss = 0.22627095902966043
Epoch 61: training loss = 0.22707427031272218, test loss = 0.22457920381738264
Epoch 62: training loss = 0.22556975824662254, test loss = 0.22342348245175897
Epoch 63: training loss = 0.22394016978587003, test loss = 0.22214378451809072
Epoch 64: training loss = 0.22240119884035103, test loss = 0.22006510025665185
Epoch 65: training loss = 0.22080130371365642, test loss = 0.2191556871375896
Epoch 66: training loss = 0.21934927095481174, test loss = 0.21750881442399947
Epoch 67: training loss = 0.2178467928192313, test loss = 0.21627573863046223
Epoch 68: training loss = 0.21656956422352497, test loss = 0.21533207396738813
Epoch 69: training loss = 0.21508849204486555, test loss = 0.21372939943445154
Epoch 70: training loss = 0.21370827505579276, test loss = 0.21225001599296872
Epoch 71: training loss = 0.21225320443110343, test loss = 0.21144524543831658
Epoch 72: training loss = 0.2108885002974858, test loss = 0.2102776562763972
Epoch 73: training loss = 0.20954210064800713, test loss = 0.2087756764993586
Epoch 74: training loss = 0.208194835808494, test loss = 0.20761550860818484
Epoch 75: training loss = 0.20706684437003592, test loss = 0.20648573570030354
Epoch 76: training loss = 0.205624963775563, test loss = 0.205096640239443
Epoch 77: training loss = 0.20438653907990292, test loss = 0.20426750502243504
Epoch 78: training loss = 0.20311977679491527, test loss = 0.2027800387908222
Epoch 79: training loss = 0.20182033645409317, test loss = 0.2017654254690189
Epoch 80: training loss = 0.2005871946954701, test loss = 0.20055084601284986
Epoch 81: training loss = 0.19939827519673903, test loss = 0.1995667893531613
Epoch 82: training loss = 0.19824643291696187, test loss = 0.19856482845564033
Epoch 83: training loss = 0.19701123703709778, test loss = 0.1974063086370404
Epoch 84: training loss = 0.1959129722558054, test loss = 0.19624650081959166
Epoch 85: training loss = 0.1947552009834779, test loss = 0.19510139015966296
Epoch 86: training loss = 0.19358303338358793, test loss = 0.1941810905892786
Epoch 87: training loss = 0.19242753647297092, test loss = 0.19303890874966217
Epoch 88: training loss = 0.19135238457346537, test loss = 0.1922042558615807
Epoch 89: training loss = 0.1902226439919596, test loss = 0.1910884036623716
Epoch 90: training loss = 0.18917133039844, test loss = 0.1900239652532282
Epoch 91: training loss = 0.18816289772771091, test loss = 0.18923456213216158
Epoch 92: training loss = 0.1870551522737503, test loss = 0.18825794740338042
Epoch 93: training loss = 0.18611841619640812, test loss = 0.18767239229974655
Epoch 94: training loss = 0.1850060904345, test loss = 0.18645013092181048
Epoch 95: training loss = 0.18391732556251747, test loss = 0.18539911622916957
Epoch 96: training loss = 0.18298730458387677, test loss = 0.18439692364437568
Epoch 97: training loss = 0.18201993841394323, test loss = 0.18388391986140576
Epoch 98: training loss = 0.18105105508679187, test loss = 0.18281725473643523
Epoch 99: training loss = 0.1799979375810235, test loss = 0.18172077248659857
Epoch 100: training loss = 0.1790280688337282, test loss = 0.18093950914387927
###Markdown
Evaluation
###Code
plt.plot(range(len(history_loss_train)), history_loss_train)
# TODO move to utils.py
def onehot_softmax(out):
"""convert output of softmax to onehot encoding"""
one_hot = np.zeros_like(out)
one_hot[np.argmax(out)] = 1
return one_hot
out = feed_forward(X_test, params)["A2"]
out_one_hot = np.apply_along_axis(onehot_softmax, axis=1, arr=out.T)
from sklearn.metrics import classification_report
print(classification_report(Y_test.T, out_one_hot))
###Output
precision recall f1-score support
0 0.96 0.99 0.97 980
1 0.97 0.98 0.98 1135
2 0.95 0.94 0.94 1032
3 0.93 0.95 0.94 1010
4 0.94 0.95 0.94 982
5 0.95 0.91 0.93 892
6 0.95 0.96 0.95 958
7 0.95 0.94 0.95 1028
8 0.94 0.93 0.93 974
9 0.93 0.93 0.93 1009
micro avg 0.95 0.95 0.95 10000
macro avg 0.95 0.95 0.95 10000
weighted avg 0.95 0.95 0.95 10000
samples avg 0.95 0.95 0.95 10000
|
Storage.ipynb | ###Markdown
Goal Build all the pinning services into a more abstract API to upload and download datasets: NFT Storage, Pinata, Estuary.
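A minimal sketch of the kind of shared interface this could converge to is shown in the next cell; the class and method names there are assumptions for illustration, not an existing module.
###Code
# hedged sketch of a common interface for the pinning services explored below
# (names are illustrative assumptions, not an existing API)
from abc import ABC, abstractmethod

class PinningService(ABC):
    @abstractmethod
    def get_creds(self):
        """Load the service-specific credentials."""

    @abstractmethod
    def upload_file(self, path, cred):
        """Upload/pin a local file and return (response_json, status_code)."""

    @abstractmethod
    def get_file(self, cid, cred):
        """Retrieve a pinned file by its CID."""
###Output
_____no_output_____
###Markdown
Each concrete wrapper below (NFT Storage, Pinata, Estuary) could then implement this interface.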
###Code
import pandas as pd
from helpers.helper import read_file
###Output
_____no_output_____
###Markdown
NFT Storage
###Code
from storage.nftstorage import NFTStorage
nft_storage = NFTStorage()
cred = nft_storage.get_creds()
w = open("dataset/sample/tabular-epochs100_ntp5_tw3.pkl","rb")
response,status = nft_storage.upload_file(cred,"dataset/sample/tabular-epochs100_ntp5_tw3.pkl")
check_file = nft_storage.get_file(response["value"]["cid"],cred)
# nft_storage.unpin("bafybeie4yihf42kahspmhqkmovpxt2iliqqikbriwsglb6reujvzsnp5ke",cred)
###Output
_____no_output_____
###Markdown
IPFS
###Code
from storage.ipfs import IPFS
# from helpers.helper import read_file
ipfs = IPFS()
# w = ipfs.add("All_Gateway_Properties.csv")
# w.json()
from storage.ipfs import IPFS
# from helpers.helper import read_file
ipfs = IPFS()
f_name, log = ipfs.get_file(response["value"]["cid"],local_node=False)
import pickle
string_of_bytes_obj = str(pickle.dumps(f_name.content), encoding="latin1")
unpickled_dict = pickle.loads(bytes(string_of_bytes_obj, "latin1"))
import torch
from io import BytesIO
torch.load(BytesIO(unpickled_dict),map_location=torch.device('cpu'))
read_file(response)
import pandas as pd
df_peers = pd.DataFrame(ipfs.get_peers()["Peers"]).dropna()
df_peers["Latency"] = df_peers["Latency"].replace("n/a",None)
df_peers["Latency_float"] = df_peers["Latency"].str.strip("ms").astype("float64")
df_peers["Stream_Count"] = df_peers["Streams"].apply(lambda x: len(x))
peers = df_peers["Peer"].to_list()
peers[0]
for peer in peers:
dht = ipfs.get_dht(peer)
with open(f"dht/dht_{peer}.pkl","w") as f:
f.writelines(dht.text)
import json
import glob
df = pd.DataFrame()
for dht_f in glob.glob("dht/dht_*"):
with open(dht_f,"r") as f:
d = f.readlines()
clean = [json.loads(i.strip()) for i in d]
df = pd.concat([df,pd.DataFrame(clean)])
df.to_csv("snapshot1.csv",index=False)
df_dhtpeers = pd.DataFrame(dht_file,columns=["peers"])
df_dhtpeers
ipfs.dht_get_file("bafkreiesap4b4a4xgvfxhsz6hdza7atezclk4op3iz5lusu5xxx7542ns4")
_ = df_peers["Latency_float"].hist()
df_peers.describe()
df_peers
###Output
_____no_output_____
###Markdown
Pinata
###Code
from storage.pinataV1 import PinataV1
pinata = PinataV1()
cred = pinata.get_creds()
pinata.upload_file("dataset/sample/All_Gateway_Properties.csv",cred)
pinata.pin(cred,response["value"]["cid"],fn="tabular-epochs100_ntp5_tw3.pkl")
pinata.unpin("QmQwwMbykhX3wyCdC5yBRXV4JQRe5FFLiyynaUiLbmvLzn",cred)
pinata.edit_hash(cred,"Popcorn Limited - Dune Dashboards - 03182022.pdf","bafkreief472lhr6b54baq4oansiwor4ed2te2xztwgjqez2btrns5d6dxi")
f_name = ipfs.get_file("QmQwwMbykhX3wyCdC5yBRXV4JQRe5FFLiyynaUiLbmvLzn",local_node=False)
pinata.pin_policy("bafybeif36uqfqj5on23qfdl3nglc7ncgcbnz4ixpleomskg7pqn3xmlgfi",cred,region="FRA1",replications=1)
pinata.globalpin_policy(cred,region="FRA1",replications=1)
###Output
_____no_output_____
###Markdown
Request Files
###Code
import pandas as pd
"""
"sort" - Sort the results by the date added to the pinning queue (see value options below)
"ASC" - Sort by ascending dates
"DESC" - Sort by descending dates
"status" - Filter by the status of the job in the pinning queue (see potential statuses below)
"prechecking" - Pinata is running preliminary validations on your pin request.
"searching" - Pinata is actively searching for your content on the IPFS network. This may take some time if your content is isolated.
"retrieving" - Pinata has located your content and is now in the process of retrieving it.
"expired" - Pinata wasn't able to find your content after a day of searching the IPFS network. Please make sure your content is hosted on the IPFS network before trying to pin again.
"over_free_limit" - Pinning this object would put you over the free tier limit. Please add a credit card to continue pinning content.
"over_max_size" - This object is too large of an item to pin. If you're seeing this, please contact us for a more custom solution.
"invalid_object" - The object you're attempting to pin isn't readable by IPFS nodes. Please contact us if you receive this, as we'd like to better understand what you're attempting to pin.
"bad_host_node" - You provided a host node that was either invalid or unreachable. Please make sure all provided host nodes are online and reachable.
"ipfs_pin_hash" - Retrieve the record for a specific IPFS hash
"limit" - Limit the amount of results returned per page of results (default is 5, and max is 1000)
"offset" - Provide the record offset for records being returned. This is how you retrieve records on additional pages (default is 0)
MetaData
metadata[name]=exampleName
&metadata[keyvalues]={"exampleKey":{"value":"exampleValue","op":"exampleOp"},"exampleKey2":{"value":"exampleValue2","op":"exampleOp2"}}
"""
params = {"sort":"DESC","status":None,"prechecking":None,
"searching":None,"retrieving":None,
"expired":None,"over_free_limit":None,
"over_max_size":None,"invalid_object":None,
"bad_host_node":None,"ipfs_pin_hash":None,
"limit":None,"offset":None,"metadata[name]":None,
"metadatakeyvalues":"keyvalues"
}
pf,status = pinata.get_pinned_files(cred,params)
pd.DataFrame(pf["rows"])
###Output
_____no_output_____
###Markdown
Requesting Pin Jobs
###Code
pf,status = pinata.get_pinned_jobs(cred,params)
pf
pinata.get_datausage(cred)
#convert bytes to MB
2103065 * 10**-6
###Output
_____no_output_____
###Markdown
Work in Progress Pinata V2
###Code
# import requests module
import requests
import json
from requests.auth import HTTPBasicAuth
class PinataV2:
def get_creds(self):
with open("creds.json") as f:
cred = json.loads(f.read())
return cred["Pinata"]
def upload_file(self,fn,cred):
url = "https://managed.mypinata.cloud/api/v1/content"
payload={}
headers = {
'x-api-key': cred['API Key']
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
return response.json(),response.status_code
pinatav2 = PinataV2()
cred = pinatav2.get_creds()
pinatav2.upload_file("storage/ipfs.py",cred)
###Output
_____no_output_____
###Markdown
Estuary
###Code
import requests
import json
class Estuary:
def get_creds(self):
with open("creds.json") as f:
cred = json.loads(f.read())
return cred["Estuary"]
def upload_file(self,fn,cred):
base_url1 = "https://shuttle-4.estuary.tech/content/add"
base2 = "https://shuttle-5.estuary.tech/content/add"
header = {"Content-Type":"multipart/form-data",
"Authorization":"Bearer " + cred["API Key"],
}
#Filename,file, content_type, cookie expiration?
files = {'file': (fn, open(fn, 'rb'),
"multipart/form-data", {'Expires': '0'})
}
response = requests.post(base_url1, headers=header,data=files)
return response.json(),response.status_code
def get_pins(self,cred):
base_url = "https://api.estuary.tech/pinning/pins"
header = {"Authorization": "Bearer " + cred["API Key"]}
response = requests.get(base_url,headers=header)
return response.json(),response.status_code
e = Estuary()
cred = e.get_creds()
pins = e.get_pins(cred)
e.upload_file("dataset/sample/All_Gateway_Properties.csv",cred)
###Output
_____no_output_____ |
SKETCHER.ipynb | ###Markdown
sketch generating function
###Code
import cv2

def sketch(image):
gray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
gray_image_blur = cv2.GaussianBlur(gray_image, (1,1), 0)
gray_image_blur = cv2.bilateralFilter(gray_image_blur, 1, 75, 75)
# value belov threshold 1 is not an edge
# valye between threshold 1 and 2 can be edge if it is
# vale above threshold 2 is an edge
canny_edges = cv2.Canny(gray_image_blur, 10, 70)
ret, mask = cv2.threshold(canny_edges, 70, 255, cv2.THRESH_BINARY_INV)
return mask
cap_vid = cv2.VideoCapture(0)
while True:
ret, frame = cap_vid.read()
cv2.imshow('Grey-Scale Sketcher Live', sketch(frame))
if cv2.waitKey(1) == 13:
break
cap_vid.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
notebooks/MEPG.ipynb | ###Markdown
CARTPOLE (non-safe) seeds = [2030, 4473, 5759, 5756, 4146, 1428, 9723, 3212, 8589, 1971]
###Code
os.chdir('/home/matteo/policy-optimization/results/sepg/cartpole_unsafe')
###Output
_____no_output_____
###Markdown
* sigmainit = 0.5
###Code
nu.compare('contcartpole',
['LOWGPOMDPf', 'LOWNAIVEf', 'LOWMEPGf', 'entropyf'],
keys=['UPerf', 'Exploration'],
separate=False,
conf=.99)
nu.save_csv('contcartpole', 'LOWGPOMDPf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'LOWNAIVEf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'LOWMEPGf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'entropyf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'LOWGPOMDPf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'LOWNAIVEf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'LOWMEPGf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'entropyf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.compare('contcartpole',
['highmepglong'],
keys=['UPerf', 'Exploration'],
separate=False, nrows=1000,
conf=.99)
nu.save_csv('contcartpole', 'highmepglong', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots', rows=1000)
nu.save_csv('contcartpole', 'highmepglong', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots', rows=1000)
###Output
_____no_output_____
###Markdown
* sigmainit = 5.0
###Code
nu.compare('contcartpole',
['HIGHGPOMDPf', 'HIGHNAIVEf', 'HIGHMEPGf', 'highentropyf'],
keys=['UPerf', 'Exploration'],
separate=False,
conf=.99)
nu.save_csv('contcartpole', 'HIGHGPOMDPf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'HIGHNAIVEf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'HIGHMEPGf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'highentropyf', 'UPerf', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'HIGHGPOMDPf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'HIGHNAIVEf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'HIGHMEPGf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
nu.save_csv('contcartpole', 'highentropyf', 'Exploration', path='/home/matteo/budget_paper/AISTATS2020/plots')
###Output
_____no_output_____ |
Kishan_ML_K_MeansClustering.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler
import seaborn as sns
%matplotlib inline
cust = pd.read_csv("customers.csv")
cust
cust.isnull().sum()
cust.duplicated().sum()
cust.dtypes
def statistics(variable):
if variable.dtype == "int64" or variable.dtype == "float64":
return pd.DataFrame([[variable.name, np.mean(variable), np.std(variable), np.median(variable), np.var(variable)]],
columns = ["Variable", "Mean", "Standard Deviation", "Median", "Variance"]).set_index("Variable")
else:
return pd.DataFrame(variable.value_counts())
spending = cust["Spending Score (1-100)"]
statistics(spending)
sns.distplot(cust["Spending Score (1-100)"], bins=10, kde_kws={"lw": 1.5, "alpha":0.8, "color":list(map(float, np.random.rand(3,)))},
hist_kws={"linewidth": 1.5, "edgecolor": "grey",
"alpha": 0.4, "color":list(map(float, np.random.rand(3,)))})
income = cust["Annual Income (k$)"]
statistics(income)
sns.distplot(cust["Annual Income (k$)"], bins=10, kde_kws={"lw": 1.5, "alpha":0.8, "color":list(map(float, np.random.rand(3,)))},
hist_kws={"linewidth": 1.5, "edgecolor": "grey",
"alpha": 0.4, "color":list(map(float, np.random.rand(3,)))})
gender = cust["Gender"]
statistics(gender)
gender = pd.DataFrame(cust["Gender"])
sns.catplot(x=gender.columns[0], kind="count", palette="spring", data=gender)
dummies = pd.get_dummies(cust['Gender'])
dummies
cust = cust.merge(dummies, left_index=True, right_index=True)
cust
new_cust = cust.iloc[:,2:]
new_cust
wcss = []
for i in range(1,11):
km = KMeans(n_clusters=i,init='k-means++', max_iter=300, n_init=10, random_state=0)
km.fit(new_cust)
wcss.append(km.inertia_)
plt.plot(range(1,11),wcss, c="#c51b7d")
plt.title('Elbow Method', size=14)
plt.xlabel('Number of clusters', size=12)
plt.ylabel('wcss', size=14)
plt.show()
kmeans = KMeans(n_clusters=5, init='k-means++', max_iter=10, n_init=10, random_state=0)
kmeans.fit(new_cust)
centroids = pd.DataFrame(kmeans.cluster_centers_, columns = ["Age", "Annual Income", "Spending", "Male", "Female"])
centroids.index.name = "ClusterID"  # note: .index_name is not a pandas attribute
centroids["ClusterID"] = centroids.index
centroids = centroids.reset_index(drop=True)
centroids
X_new = np.array([[43, 76, 56, 0, 1]])
new_customer = kmeans.predict(X_new)
print(f"The new customer belongs to segment(Cluster) {new_customer[0]}")
###Output
_____no_output_____ |
1_Pandas Introduction.ipynb | ###Markdown
---_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 2 - Pandas Introduction All questions are weighted the same in this assignment. Part 1 The following code loads the olympics dataset (olympics.csv), which was derived from the Wikipedia entry on [All Time Olympic Games Medals](https://en.wikipedia.org/wiki/All-time_Olympic_Games_medal_table), and does some basic data cleaning. The columns are organized as: number of Summer games, Summer medals, number of Winter games, Winter medals, total number of games, total number of medals. Use this dataset to answer the questions below.
###Code
import pandas as pd
df = pd.read_csv('olympics.csv', index_col=0, skiprows=1)
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold'+col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver'+col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze'+col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#'+col[1:]}, inplace=True)
names_ids = df.index.str.split('\s\(') # split the index by '('
df.index = names_ids.str[0] # the [0] element is the country name (new index)
df['ID'] = names_ids.str[1].str[:3] # the [1] element is the abbreviation or ID (take first 3 characters from that)
df = df.drop('Totals')
df.head()
###Output
_____no_output_____
###Markdown
Question 0 (Example) What is the first country in df? *This function should return a Series.*
###Code
# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
def answer_zero():
# This function returns the row for Afghanistan, which is a Series object. The assignment
# question description will tell you the general format the autograder is expecting
return df.iloc[0]
# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1 Which country has won the most gold medals in summer games? *This function should return a single string value.*
###Code
def answer_one():
# return index of rows in which summer gold count = max(summer gold count)
return df.loc[df['Gold']==df['Gold'].max()].index[0]
answer_one()
###Output
_____no_output_____
###Markdown
Question 2 Which country had the biggest difference between their summer and winter gold medal counts? *This function should return a single string value.*
###Code
def answer_two():
# create a column containing the biggest difference between their summer and winter gold medal counts
df['diff']=df['Gold']-df['Gold.1']
# return index of rows in which this difference = max(all the differences)
return df.loc[df['diff']==df['diff'].max()].index[0]
answer_two()
###Output
_____no_output_____
###Markdown
Question 3 Which country has the biggest difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count? $$\frac{Summer~Gold - Winter~Gold}{Total~Gold}$$ Only include countries that have won at least 1 gold in both summer and winter. *This function should return a single string value.*
###Code
def answer_three():
# extract rows where country has won at least 1 gold in both summer and winter
df_1=df.copy()
df_1=df_1[(df_1['Gold']>0) & (df_1['Gold.1']>0)]
# create a column containing difference between their summer gold medal counts and winter gold medal counts relative to their total gold medal count
df_1['ratio']=df_1['diff']/(df_1['Gold']+df_1['Gold.1'])
# return index of rows in which this ratio = max(all the ratios)
return df_1.loc[df_1['ratio']==df_1['ratio'].max()].index[0]
answer_three()
###Output
_____no_output_____
###Markdown
Question 4Write a function that creates a Series called "Points" which is a weighted value where each gold medal (`Gold.2`) counts for 3 points, silver medals (`Silver.2`) for 2 points, and bronze medals (`Bronze.2`) for 1 point. The function should return only the column (a Series object) which you created.*This function should return a Series named `Points` of length 146*
###Code
def answer_four():
# create a column containing these calculated points for all observations
df['Points']=df['Gold.2']*3+df['Silver.2']*2+df['Bronze.2']
# return the column (as a series)
return df['Points']
answer_four()
###Output
_____no_output_____
###Markdown
Part 2For the next set of questions, we will be using census data from the [United States Census Bureau](http://www.census.gov/popest/data/counties/totals/2015/CO-EST2015-alldata.html). Counties are political and geographic subdivisions of states in the United States. This dataset contains population data for counties and states in the US from 2010 to 2015. [See this document](http://www.census.gov/popest/data/counties/totals/2015/files/CO-EST2015-alldata.pdf) for a description of the variable names.The census dataset (census.csv) should be loaded as census_df. Answer questions using this as appropriate. Question 5Which state has the most counties in it? (hint: consider the sumlevel key carefully! You'll need this for future questions too...)*This function should return a single string value.*
###Code
census_df = pd.read_csv('census.csv')
census_df.head()
def answer_five():
# create a set of all the unique values of STATE values
st_set = set(census_df['STATE'])
# create a list of STATE values (as they appear in the dataframe)
st_list = list(census_df['STATE'])
result={} # this dictionary will store the number of counties in each state
for item in st_set: # iterating over each unique STATE value
result[item]=st_list.count(item) # count the number of 'item' in the list
# convert the dictionary into a dataframe
x = pd.DataFrame([result])
y=x.T # transpose of the dataframe
#gather the index of each observation into a new variable
y['state']=y.index
# save the index for the state with maximum number of counties
state_code=y['state'][y[0]==y[0].max()].index[0]
# get the index for this statecode in the original dataframe
i= census_df[(census_df['SUMLEV']==40) & (census_df['STATE']==state_code)]['STNAME'].index[0]
# get the name of the state from this index
return census_df.iloc[i]['STNAME']
answer_five()
###Output
_____no_output_____
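###Markdown
A hedged, more direct alternative (same result, relying on the dataset's convention that SUMLEV 50 marks county-level rows): filter to county rows and count county names per state.
###Code
# illustrative alternative for Question 5, not the graded answer above
census_df[census_df['SUMLEV'] == 50]['STNAME'].value_counts().idxmax()
###Output
_____no_output_____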
###Markdown
Question 6 Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)? Use `CENSUS2010POP`. *This function should return a list of string values.*
###Code
def answer_six():
# list of column names to extract from dataframe
list_out=['STATE','STNAME','CTYNAME','CENSUS2010POP']
# extract county observations into a new dataframe
census=census_df[census_df['SUMLEV']==50]
# extract the relevant columns for the counties
county = census[list_out]
results={} # dictionary stores results from sum of population of three most populous counties for each state
for state in set(county['STNAME']): # iterate over each STNAME
temp=census[census['STNAME']==state] #extract all observations for that STNAME
# sort the counties in each state by population values and sum up the three highest populations
results[state]=temp.sort_values(by='CENSUS2010POP', ascending=False)[:3]['CENSUS2010POP'].sum()
# convert dictionary containing STNAME and population values, into dataframe
z=pd.DataFrame([results])
# sort this dataframe by decreasing order of state population and extract 3 most populous states
z_1=z.T.sort_values(by=0, ascending=False)[:3]
# extract names of state from the index of the dataframe
z_1['names']=z_1.index
return list(z_1['names']) # return a list
answer_six()
###Output
_____no_output_____
###Markdown
Question 7 Which county has had the largest absolute change in population within the period 2010-2015? (Hint: population values are stored in columns POPESTIMATE2010 through POPESTIMATE2015; you need to consider all six columns.) E.g. if a county's population in the 5-year period is 100, 120, 80, 105, 100, 130, then its largest change in the period would be |130-80| = 50. *This function should return a single string value.*
###Code
def answer_seven():
# extract a dataframe of only county level observations
census=census_df[census_df['SUMLEV']==50]
# list of relevant column names
list_out=['CTYNAME','POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012', 'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015' ]
# extract column information for the counties
county = census[list_out]
# transpose the dataframe to perform min, max operation (on columns)
county_t=county.T.drop('CTYNAME')
results={} # dictionary to store the county-wise largest absolute change in population within the period 2010-2015
for num in county.index: # interate over all counties
results[num]=county_t[num].max()-county_t[num].min()
# store the difference for each county
#add the difference results to the dataframe
county_t=county_t.append(results, ignore_index=True)
z=county_t.T # transpose the appended dataframe
# find the index of observation with largest difference as defined above
i=z[z[6]==z[6].max()].index[0]
# find the county name corresponding to the above index
return county.loc[i]['CTYNAME']
answer_seven()
###Output
_____no_output_____
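###Markdown
A hedged, vectorized variant of the same computation: take the row-wise max minus min over the six POPESTIMATE columns and look up the county with the largest difference.
###Code
# illustrative alternative for Question 7
pop_cols = ['POPESTIMATE2010', 'POPESTIMATE2011', 'POPESTIMATE2012',
            'POPESTIMATE2013', 'POPESTIMATE2014', 'POPESTIMATE2015']
county_rows = census_df[census_df['SUMLEV'] == 50]
change = county_rows[pop_cols].max(axis=1) - county_rows[pop_cols].min(axis=1)
county_rows.loc[change.idxmax(), 'CTYNAME']
###Output
_____no_output_____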
###Markdown
Question 8 In this datafile, the United States is broken up into four regions using the "REGION" column. Create a query that finds the counties that belong to regions 1 or 2, whose name starts with 'Washington', and whose POPESTIMATE2015 was greater than their POPESTIMATE2014. *This function should return a 5x2 DataFrame with the columns = ['STNAME', 'CTYNAME'] and the same index ID as the census_df (sorted ascending by index).*
###Code
def answer_eight():
# extract a dataframe that satisfies the conditions mentioned
out= census_df[((census_df['REGION']==1) | (census_df['REGION']==2)) & (census_df['CTYNAME'].str.startswith('Washington')) & (census_df['POPESTIMATE2015']> census_df['POPESTIMATE2014']) & (census_df['SUMLEV']==50)]
# list of relevant columns
list_out=['STNAME','CTYNAME']
# return the columns from the extracted dataframe
return out[list_out]
answer_eight()
###Output
_____no_output_____ |
Sales_SuperStore_EDA.ipynb | ###Markdown
Importing required Libraries
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.font_manager
plt.rcParams.update({'font.family':'sans-serif','font.size':14})
###Output
_____no_output_____
###Markdown
Importing Data and Preprocessing
###Code
df = pd.read_csv('SampleSuperstore.csv')
df.head()
df.info()
df.isnull().sum()
## Null values not available in the dataset
df.duplicated(keep=False).sum()
## 14 duplicate rows present but since there is no unique ID, considering them as differnent purchases
df['Postal Code'] = df['Postal Code'].astype('object')
###Output
_____no_output_____
###Markdown
Descriptive Statistics and Correlation
###Code
df.describe().transpose()
df.describe(include=object).transpose()
## Country columns has only one unique value - United States.
## Hence, either the shop is only in United States (or) only US data is available to us
## The united States the data covers all regions, major cities and pincodes
## 17 sub-category of products across 3 main categories are in the data
## Purchases are processed in 4 different type of shipping mode
corre = df.corr()
corre
## Sales have moderate positive correaltion with profit
## Sales and profit have negative correlatio with discount
plt.figure(figsize=(12,12))
sns.pairplot(df.drop(columns='Postal Code'),diag_kind='kde')
plt.show()
###Output
_____no_output_____
###Markdown
Bivariate and Multi Variate Analysis Analysis based on Quantity sold
###Code
quant_cat = df.groupby(['Category']).Quantity.sum().sort_values(ascending=False)
sns.barplot(x=quant_cat,y=quant_cat.index)
plt.show()
## Office supplies has more 3x count of sales compared to others two categories individually
quant_cat_sub = df.groupby(['Category','Sub-Category']).Quantity.sum().reset_index().sort_values(by=['Category','Quantity'], ascending=[True,False], key=lambda x : x.replace({'Office Supplies':1,'Furniture':2,'Technology':3}))
plt.figure(figsize=(12,8))
sns.barplot(data=quant_cat_sub,y='Sub-Category',x='Quantity',hue='Category')
plt.title('Sub-Category wise Quantity sold',fontdict={'fontsize':22,'family':'sans serif','color':'darkred'})
plt.xlabel('Total Quantity',fontdict={'fontsize':18,'family':'sans serif','color':'darkred'})
plt.ylabel('Sub-Category',fontdict={'fontsize':18,'family':'sans serif','color':'darkred'})
plt.show()
## From the above graph, we can see the quantity of items sold in each sub category
## Top 2 sub categories as per quantity sold
## 1) Office Supplies - Binders, Paper
## 2) Furniture - Furnishings, Chairs
## 3) Technology - Phones, Accessories
###Output
_____no_output_____
###Markdown
Discount !!!!
###Code
df_discount = df[df.Discount != 0]
disc_cat = df_discount.groupby('Category')['Discount'].mean().sort_values(ascending=False)
sns.barplot(x=disc_cat,y=disc_cat.index)
plt.show()
## Average discount % is high for Office Supplies products
plt.figure(figsize=(16,8))
ax1, ax2 = plt.subplot(1,2,1), plt.subplot(1,2,2)
sns.violinplot(data=df,x='Discount',y='Category',inner='box',ax=ax1)
ax1.set_title('For all the items')
sns.violinplot(data=df_discount,x='Discount',y='Category',inner='box',ax=ax2,sharex=ax1)
ax2.set_title('Only for the discounted items')
ax2.set_yticks([])
ax2.set_ylabel('')
plt.show()
## --> for all the categories
## 1) Purchases with 0 % discount are high
## 2) and for the discounted purchases, median discount is 20%
## --> For office supplies, we can see distribution at around 80% which is the max
## --> For technology, thers are some items discounted at around 40%
## --> Furnitures have a wide range of discounts available
print(df.iloc[df_discount.Discount.idxmax()][['Segment','Category','Sub-Category','Quantity','Discount','Profit']])
## The max discount of 80% was provided for a 5 items Office Supplies purchase which resulted in a loss of $123
###Output
_____no_output_____
###Markdown
Function Definition for plotting sales and profit against other features
###Code
def sales_profit(col,filter_col=None,filter_condition=None,n=5):
if (filter_col == None) | (filter_condition == None):
new_df = df
a = ''
else:
new_df = df[df[filter_col]==filter_condition]
a = 'In {} as {},'.format(filter_col,filter_condition)
print('Sales Contribution by {}\n'.format(col))
print(new_df[col].value_counts().head(10))
fig = plt.figure(figsize=(16, 12))
gs = fig.add_gridspec(nrows = 2, ncols = 4, width_ratios=(2, 2, 2, 2), height_ratios=(2, 2),
left=0.1, right=0.9, bottom=0.1, top=0.9, wspace=0.8, hspace=0.2)
font_title ={'fontsize':20,'family': 'sans-serif','color': 'darkred','weight': 'normal','size': 20}
ax_pie = fig.add_subplot(gs[0, 1:3])
sales = (new_df.groupby(col).Sales.sum().sort_values(ascending=False)*100/new_df.Sales.sum())
if len(sales) > 10:
sns.barplot(x=sales.head(15),y=sales.head(15).index,ax=ax_pie)
ax_pie.set_title('{} \nTop 15 Revenue contributing {}'.format(a,col),fontdict=font_title)
elif len(sales) > 5:
ax_pie.pie(sales,autopct='%.2f%%',textprops={'fontsize':16,'color':'white'})
ax_pie.set_title('{} \nRevenue contribution by {}'.format(a,col),fontdict=font_title)
ax_pie.legend(bbox_to_anchor=(1,1))
else:
ax_pie.pie(sales,autopct='%.2f%%',labels=sales.index,textprops={'fontsize':16,'color':'white'})
ax_pie.set_title('{} \nRevenue contribution by {}'.format(a,col),fontdict=font_title)
ax_pie.legend(bbox_to_anchor=(1,1))
ax_sales = fig.add_subplot(gs[1,:2])
avg_sales = (new_df.groupby(col).Sales.mean().sort_values(ascending=False)).head(5)
sns.barplot(x=avg_sales.values,y=avg_sales.index)
ax_sales.set_title('Average Revenue per purchase by {}'.format(col),fontdict=font_title)
ax_sales.set_xlabel('Revenue')
ax_profit = fig.add_subplot(gs[1,2:])
avg_profit = (new_df.groupby(col).Profit.mean().sort_values(ascending=False)).head(5)
sns.barplot(x=avg_profit.values,y=avg_profit.index)
ax_profit.set_title('Average Profit per purchase by {}'.format(col),fontdict=font_title)
ax_profit.set_xlabel('Profit')
ax_profit.set_ylabel('')
plt.show()
###Output
_____no_output_____
###Markdown
Which region of US yields least sales and profit and why?
###Code
sales_profit(col='Region')
## Inference:
## 1) West and East regions contribute to 60% of the revenue
## 2) Central regions of US produce significantly low average profit per purchase. Further research needs to be done
sales_profit(col='Category',filter_col='Region',filter_condition='Central')
## On further analysis, it is found that furniture business in Central US,
## Furniture business contributing for 33% of the market share is in loss leading to significant drop in profit
###Output
_____no_output_____
###Markdown
State wise Analysis
###Code
tot_sales_state = (df.groupby('State').Sales.sum().sort_values(ascending=False))
plt.figure(figsize=(12,4))
sns.barplot(x=tot_sales_state.head(),y=tot_sales_state.head().index)
plt.title('Top 5 Revenue contributing state')
plt.show()
###Output
_____no_output_____
###Markdown
Which city yields high sales and profit?
###Code
tot_sales_city = (df.groupby('City').Sales.sum().sort_values(ascending=False))
plt.figure(figsize=(12,4))
sns.barplot(x=tot_sales_city.head(10),y=tot_sales_city.head(10).index)
plt.title('Top 10 Revenue contributing cities')
plt.savefig('top_rev_city.png')
plt.show()
sales_city = (df.groupby('City').Sales.mean().sort_values(ascending=False))
profit_city = (df.groupby('City').Profit.mean().sort_values(ascending=False))
fig = plt.figure(figsize=(16, 4))
gs = fig.add_gridspec(nrows = 1, ncols = 2, width_ratios=(2, 2), left=0.05, right=0.95, wspace=0.28)
ax_rev = fig.add_subplot(gs[0, 0])
sns.barplot(x=sales_city.head(10),y=sales_city.head(10).index,ax=ax_rev)
ax_rev.set_title('Avg Revene per purchase by City')
ax_pro = fig.add_subplot(gs[0, 1])
sns.barplot(x=profit_city.head(10),y=profit_city.head(10).index,ax=ax_pro)
ax_pro.set_title('Avg Profit per purchase by City')
ax_pro.set_ylabel('')
plt.show()
## Inference :
## 1) Major revenue generating cities are New York, Los Angeles, Seattle, San Francisco and Phildelphia
## 2) Jameston, Independence, Appleton, Burbank and Beverly seems to be a good market to target
###Output
_____no_output_____
###Markdown
Does ship mode has relation with sales and profit?
###Code
sales_profit('Ship Mode')
## Inference :
## 1) Standard Class is the dominant Shipping mode of Sales but has the least contribution to average revenue and profit per purchase
## 2) Same day is the minority of the Shipping modes accounting for 5% but has the highest average revenue per purchase
## 3) First Class shipping mode has the maximum average profit per purchase
###Output
_____no_output_____
###Markdown
Which segment provides high sales and profit?
###Code
sales_profit('Segment')
## Inference :
## 1) We have more sales in 'Consumer' segment but with least average revenue and profit per purchase compared
## 2) 'Home office' has the highest average revenue and profit per purchase
###Output
_____no_output_____
###Markdown
Which category yields high sales and profit?
###Code
sales_profit('Category')
## Inference :
## 1) Revenue contribution is almost equally distributed between the categories
## 2) Technology leads with higher avg sales and profit per purchase
## 3) Furniture category purchases yields the least avg profit
###Output
_____no_output_____
###Markdown
In Technology, Which sub-category yields high sales and profit?
###Code
sales_profit(col='Sub-Category',filter_col='Category',filter_condition='Technology')
## Inference :
## 1) Phones are the frequently purchased technology in the store
## 2) Surprisingly, Selling a unit of copier gives almost 10 times more profit than selling a unit product from any other
## sub category in Technology
###Output
_____no_output_____ |
AV_DHS_2017/Modeling.ipynb | ###Markdown
Modeling Here we will have some sample codes and links with respect to the modeling section. Modeling Bigger Datasets 1. [FTRL Implementation](https://www.kaggle.com/jiweiliu/ftrl-starter-code/code) 2. [LibFFM](https://github.com/guestwalk/libffm) 3. [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit/wiki) 4. [Incremental Learning](http://scikit-learn.org/stable/modules/scaling_strategies.html#incremental-learning) Time Series Forecasting 1. [R Tutorial](https://www.analyticsvidhya.com/blog/2015/12/complete-tutorial-time-series-modeling/) 2. [Python Tutorial](https://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/) Bayesian Optimization Some Python libraries are 1. [Hyperopt](http://hyperopt.github.io/hyperopt/) 2. [Spearmint](https://github.com/JasperSnoek/spearmint) 3. [Bayesian Optimization](https://github.com/fmfn/BayesianOptimization). Example code can be seen in this [Kaggle Kernel](https://www.kaggle.com/dreeux/hyperparameter-tuning-using-hyperopt); a minimal hyperopt sketch is also included after the random forest cell below. Random Forest
###Code
def runRF(train_X, train_y, test_X, test_y=None, test_X2=None, depth=20, leaf=10, feat=0.2):
model = ensemble.RandomForestClassifier(
n_estimators = 1000,
max_depth = depth,
min_samples_split = 2,
min_samples_leaf = leaf,
max_features = feat,
n_jobs = 4,
random_state = 0)
model.fit(train_X, train_y)
train_preds = model.predict_proba(train_X)[:,1]
test_preds = model.predict_proba(test_X)[:,1]
test_preds2 = model.predict_proba(test_X2)[:,1]
test_loss = 0
train_loss = metrics.log_loss(train_y, train_preds)
test_loss = metrics.log_loss(test_y, test_preds)
print "Train and Test loss : ", train_loss, test_loss
return test_preds, test_loss, test_preds2
###Output
_____no_output_____
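###Markdown
Since the notes above point to hyperopt for Bayesian optimization, here is a minimal, hedged sketch of its `fmin`/`tpe` API on a toy objective. The search space and objective are made up; in practice the objective would wrap a cross-validated model such as the ones in this notebook.
###Code
# minimal hyperopt sketch (illustrative only)
from hyperopt import fmin, tpe, hp, Trials

def toy_objective(p):
    # pretend loss with a known minimum at x=3, y=-1
    return (p['x'] - 3.) ** 2 + (p['y'] + 1.) ** 2

space = {'x': hp.uniform('x', -10, 10),
         'y': hp.uniform('y', -10, 10)}

trials = Trials()
best = fmin(fn=toy_objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
print(best)
###Output
_____no_output_____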
###Markdown
XGBoost / Light GBM
###Code
def runXGB(train_X, train_y, test_X, test_y=None, test_X2=None, seed_val=0, rounds=500, dep=8, eta=0.05):
params = {}
params["objective"] = "binary:logistic"
params['eval_metric'] = 'auc'
params["eta"] = eta
params["subsample"] = 0.7
params["min_child_weight"] = 1
params["colsample_bytree"] = 0.7
params["max_depth"] = dep
params["silent"] = 1
params["seed"] = seed_val
#params["max_delta_step"] = 2
#params["gamma"] = 0.5
num_rounds = rounds
plst = list(params.items())
xgtrain = xgb.DMatrix(train_X, label=train_y)
xgtest = xgb.DMatrix(test_X, label=test_y)
watchlist = [ (xgtrain,'train'), (xgtest, 'test') ]
model = xgb.train(plst, xgtrain, num_rounds, watchlist, early_stopping_rounds=100, verbose_eval=20)
pred_test_y = model.predict(xgtest, ntree_limit=model.best_ntree_limit)
pred_test_y2 = model.predict(xgb.DMatrix(test_X2), ntree_limit=model.best_ntree_limit)
loss = metrics.roc_auc_score(test_y, pred_test_y)
return pred_test_y, loss, pred_test_y2
###Output
_____no_output_____
###Markdown
Neural Networks / Deep Learning
###Code
def runNN(train_X, train_y, test_X, test_y=None, test_X2=None, epochs=100, scale=False):
if scale:
sc = preprocessing.StandardScaler()
all_X = pd.concat([train_X, test_X, test_X2], axis=0)
sc.fit(all_X)
train_X = sc.transform(train_X)
test_X = sc.transform(test_X)
test_X2 = sc.transform(test_X2)
random.seed(12345)
np.random.seed(12345)
model = Sequential()
model.add(Dense(200, input_shape=(train_X.shape[1],), init='he_uniform')) #, W_regularizer=regularizers.l1(0.002)))
model.add(Activation('relu'))
model.add(Dropout(0.3))
#model.add(Dense(50, init='he_uniform'))
#model.add(Activation('relu'))
#model.add(Dropout(0.3))
#model.add(Dense(100, init='he_uniform'))
#model.add(Activation('relu'))
#model.add(Dropout(0.3))
model.add(Dense(1, init='he_uniform'))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adagrad')
### Model fitting takes place ###
model.fit(train_X, train_y, batch_size=512, nb_epoch=epochs, validation_data=(test_X, test_y), verbose=2, shuffle=True)
preds = model.predict(test_X, verbose=0)
preds_test2 = model.predict(test_X2, verbose=0)
loss = metrics.log_loss(test_y, preds)
return preds.ravel(), loss, preds_test2.ravel()
###Output
_____no_output_____ |
Experiment1_analysis_champion_levels/convex_optimization_in_python.ipynb | ###Markdown
Imports
###Code
#Omid55
from cvxpy import *
import numpy as np
from numpy import linalg as LA
import pandas as pd
import math
def clean_it(dataset):
TEAM_SIZE = 5
# remove those teams with 3 members
print(dataset.shape)
dataset = np.delete(dataset,np.where(dataset == -1)[0],axis=0)
print(dataset.shape)
# remove identical matches
dataset = unique_rows(dataset)
print(dataset.shape)
# remove identical teams
if dataset.shape[1] <= 2*TEAM_SIZE + 1:
# just champion levels
dataset = np.delete(dataset, np.where(np.sum(dataset[:,:TEAM_SIZE] - dataset[:,TEAM_SIZE:2*TEAM_SIZE],axis=1)==0),axis=0)
else:
# champion levels and members
dataset = np.delete(dataset, np.where(np.sum(dataset[:,:TEAM_SIZE] - dataset[:,TEAM_SIZE:2*TEAM_SIZE] +
dataset[:,2*TEAM_SIZE:3*TEAM_SIZE] - dataset[:,3*TEAM_SIZE:-1], axis=1) == 0), axis=0)
print(dataset.shape)
return dataset
def unique_rows(A, return_index=False, return_inverse=False):
"""
Similar to MATLAB's unique(A, 'rows'), this returns B, I, J
where B is the unique rows of A and I and J satisfy
A = B[J,:] and B = A[I,:]
Returns I if return_index is True
Returns J if return_inverse is True
"""
A = np.require(A, requirements='C')
assert A.ndim == 2, "array must be 2-dim'l"
B = np.unique(A.view([('', A.dtype)]*A.shape[1]),
return_index=return_index,
return_inverse=return_inverse)
if return_index or return_inverse:
return (B[0].view(A.dtype).reshape((-1, A.shape[1]), order='C'),) \
+ B[1:]
else:
return B.view(A.dtype).reshape((-1, A.shape[1]), order='C')
def compute_difference_of_winners_from_losers(matches):
diff = np.zeros(len(matches))
for i, match in enumerate(matches):
loser = match[:TEAM_SIZE]
winner = match[TEAM_SIZE:]
diff[i] = LA.norm(winner,2) - LA.norm(loser,2)
print('positive percentage: ', 100*float(len(np.where(diff > 0)[0]))/len(diff))
print('zero percentage: ', 100*float(len(np.where(diff == 0)[0]))/len(diff))
print('negative percentage: ', float(100*len(np.where(diff < 0)[0]))/len(diff))
return diff
###Output
_____no_output_____
###Markdown
Body
###Code
matches = np.genfromtxt('matches_reduced.csv', delimiter=',', skip_header=True)
TEAM_SIZE = 5
matches = clean_it(matches)
'''
diff = compute_difference_of_winners_from_losers(matches)
winner_minus_loser = matches[:,TEAM_SIZE:] - matches[:,:TEAM_SIZE]
#winner_minus_loser = np.delete(winner_minus_loser, np.where(np.sum(winner_minus_loser, axis=1) == 0), axis=0)
diff, index = unique_rows(diff, return_index=True)
sorted_diff = np.sort(diff)
c = np.where(sorted_diff == 0)[0]
center_index = c[math.floor(len(c)/2)]
LEN = len(np.where(diff<0)[0])
beg_diff = sorted_diff[center_index-LEN]
end_diff = sorted_diff[center_index+LEN]
sampled_indices = np.where((diff >= beg_diff) * (end_diff >= diff))[0]
print(len(sampled_indices), 'number of close matches is sampled.')
close_matches = matches[sampled_indices]
print('\n')
compute_difference_of_winners_from_losers(close_matches)
np.savetxt(
'close_matches.csv',
close_matches,
fmt='%d',
delimiter=',',
newline='\n', # new line character
footer='', # file footer
comments='', # character to use for comments
header='team1_member1,team1_member2,team1_member3,team1_member4,team1_member5,'
+ 'team2_member1,team2_member2,team2_member3,team2_member4,team2_member5'
)
'''
def save_it_for_optimizer(matches, name, LEN):
X = matches[:,TEAM_SIZE:] - matches[:,:TEAM_SIZE] #diff = winner - loser
print(X.shape)
X = np.delete(X, np.where(np.sum(X, axis=1) == 0), axis=0)
print(X.shape)
X = unique_rows(X)
print(X.shape)
if LEN > 0:
idx = np.random.choice(len(X), LEN, replace=False)
        print('before samples:', X.shape[0])  # `diff` is not defined in this function; report the size of X before sampling
X = X[idx,:]
print('now samples:', X.shape[0])
np.savetxt(
name,
X,
fmt='%d',
delimiter=',',
newline='\n', # new line character
footer='', # file footer
comments='', # character to use for comments
header='winner1_loser1,winner2_loser2,winner3_loser3,winner4_loser4,winner5_loser5'
)
print('Positive: ', 100*len(np.where(np.sum(X, axis=1)>0)[0]) / len(X))
print('Zero: ', 100*len(np.where(np.sum(X, axis=1)==0)[0]) / len(X))
    print('Negative: ', 100*len(np.where(np.sum(X, axis=1)<0)[0]) / len(X))
    return X  # return the matrix so the assignment below receives a value
X = save_it_for_optimizer(matches, 'winner_minus_loser.csv', -1)
#save_it_for_optimizer(close_matches, 'all_winner_minus_loser_close.csv', -1)
###Output
_____no_output_____ |
notebooks/ship_stock_test.ipynb | ###Markdown
Data provider and transformer
###Code
from neo_finrl.data_processors.processor_yahoofinance import YahooFinanceProcessor
data_downloader = YahooFinanceProcessor()
###Output
_____no_output_____
###Markdown
Data Extraction
###Code
stock_history_df = data_downloader.download_data(start_date, end_date, tic_list['tic'], '1D')
if history_df_name != None:
stock_history_df.to_csv(history_df_name, index = False)
# simple hack for currency
for col_i in ['open', 'high', 'low', 'close', 'adjcp']:
stock_history_df.loc[stock_history_df.tic.str.endswith('.SI'), col_i] = \
stock_history_df.loc[stock_history_df.tic.str.endswith('.SI'), col_i]/1.3
stock_history_df.loc[stock_history_df.tic.str.endswith('.HK'), col_i] = \
stock_history_df.loc[stock_history_df.tic.str.endswith('.HK'), col_i]/7.8
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
stock_data_df = data_downloader.clean_data(stock_history_df)
stock_data_df = data_downloader.add_technical_indicator(stock_data_df, tech_indicators)
stock_data_df = data_downloader.add_turbulence(stock_data_df)
stock_data_df.to_csv('cleaned_stock.csv', index = False)
###Output
_____no_output_____
###Markdown
Create env
###Code
stock_data_df = pd.read_csv('cleaned_stock.csv')
price_array, tech_array, risk_array = data_downloader.df_to_array_fix(stock_data_df, \
tech_indicator_list= tech_indicators, \
if_vix = False)
import numpy as np
from neo_finrl.env_stock_trading.env_stock_trading import StockTradingEnv
config = dict()
config['price_array'] = price_array[:train_test_split_index]
config['tech_array'] = tech_array[:train_test_split_index]
config['risk_array'] = risk_array[:train_test_split_index]
config['if_train'] = True
initial_account = 1e5
# set high threshold to avoid whole sell
risk_thresh = np.nanmax(risk_array) + 1
config['price_array'].shape, config['tech_array'].shape, config['risk_array'].shape
stock_env = StockTradingEnv(config, \
initial_account=initial_account, \
risk_thresh=risk_thresh)
###Output
_____no_output_____
###Markdown
Test RL
###Code
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv
env_train = DummyVecEnv([lambda : stock_env])
model = PPO("MlpPolicy", env_train, learning_rate=0.00025,
n_steps=2048, batch_size=128, ent_coef=0.0,
gamma=0.99, seed=312)
model.learn(total_timesteps=1e4, tb_log_name = 'ppo')
print('Training finished!')
model.save(cwd)
print('Trained model saved in ' + str(cwd))
###Output
_____no_output_____
###Markdown
Backtesting
###Code
#test on the testing env
def testRun(model, env_instance):
state = env_instance.reset()
episode_returns = list() # the cumulative_return / initial_account
done = False
while not done:
action = model.predict(state)[0]
state, reward, done, _ = env_instance.step(action)
total_asset = env_instance.amount + (env_instance.price_ary[env_instance.day] * env_instance.stocks).sum()
episode_return = total_asset / env_instance.initial_total_asset
episode_returns.append(episode_return)
print('episode_return', episode_return)
print('Test Finished!')
return episode_returns
test_config = dict()
test_config['price_array'] = price_array[train_test_split_index:]
test_config['tech_array'] = tech_array[train_test_split_index:]
test_config['risk_array'] = risk_array[train_test_split_index:]
test_config['if_train'] = False
initial_account = 1e5
# set high threshold to avoid whole sell
risk_thresh = np.nanmax(risk_array) + 1
test_env = StockTradingEnv(test_config, \
initial_account=initial_account, \
risk_thresh=risk_thresh)
test_model = PPO.load(cwd)
cumulative_return = testRun(test_model, test_env)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(cumulative_return, label='agent return')
plt.grid()
plt.title('cumulative return')
plt.xlabel('time')
###Output
_____no_output_____ |
PhotoBlur.ipynb | ###Markdown
Code for blurring the subject's (person's) background in each image and writing the results to a folder called `blurred`.
###Code
import os
import cv2
import matplotlib.pyplot as plt

path = 'selfies\\'
selfImgs = os.listdir(path)
for image in selfImgs:
print(image)
img = cv2.imread(path+image)
img=cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
blur = cv2.blur(img,(10,10))
plt.imshow(blur)
j=cv2.cvtColor(blur, cv2.COLOR_BGR2RGB)
cv2.imwrite('blurred\\'+image+".jpg",j)
# `coco` is assumed to be the Mask R-CNN samples module (e.g. `from samples import coco`), imported elsewhere
dataset = coco.CocoDataset()
print(dataset.class_info)
###Output
_____no_output_____ |
coding-exercises/intro_coding.ipynb | ###Markdown
Introduction to codingThis is the most basic introduction we have to coding. We'll use bash as our example language, because the first programming you do will probably be in the terminal. The terminalHere's a picture of the terminal. It's your main programmatic interface with the computer.The group of programming languages that are generally used in the terminal are called "shell languages." Of those languages, bash is the most common, and that's the one we'll focus on here.Ignore the next cell. We're just setting something up to make navigating later cells easier.
###Code
%%bash
orig=`pwd`
###Output
_____no_output_____
###Markdown
Comments. Let's start with something simple. In all programming languages, there is a way to separate comments from code. In bash, you can use the `#` character.
###Code
%%bash
echo This is code
# This is not
###Output
This is code
###Markdown
You might have noticed the %%bash at the top of the last cell. That is just a Jupyter shortcut that lets us use bash in our notebooks, instead of Python or R. Common commands: man. The first command you should know is `man`, because it tells you what you need to know about other commands. `man` simply refers to the manual associated with a given command. If you know a command exists, but don't know what it does or how to use it, use `man`. NOTE: If a man page is too long for the terminal, it will allow you to scroll through it. If you want to quit the man page without scrolling through the whole thing, simply type `q` into the terminal.
###Code
%%bash
# For example, let's see what the command "echo" does
man echo
###Output
ECHO(1) BSD General Commands Manual ECHO(1)
NNAAMMEE
eecchhoo -- write arguments to the standard output
SSYYNNOOPPSSIISS
eecchhoo [--nn] [_s_t_r_i_n_g _._._.]
DDEESSCCRRIIPPTTIIOONN
The eecchhoo utility writes any specified operands, separated by single blank
(` ') characters and followed by a newline (`\n') character, to the stan-
dard output.
The following option is available:
--nn Do not print the trailing newline character. This may also be
achieved by appending `\c' to the end of the string, as is done by
iBCS2 compatible systems. Note that this option as well as the
effect of `\c' are implementation-defined in IEEE Std 1003.1-2001
(``POSIX.1'') as amended by Cor. 1-2002. Applications aiming for
maximum portability are strongly encouraged to use printf(1) to
suppress the newline character.
Some shells may provide a builtin eecchhoo command which is similar or iden-
tical to this utility. Most notably, the builtin eecchhoo in sh(1) does not
accept the --nn option. Consult the builtin(1) manual page.
EEXXIITT SSTTAATTUUSS
The eecchhoo utility exits 0 on success, and >0 if an error occurs.
SSEEEE AALLSSOO
builtin(1), csh(1), printf(1), sh(1)
SSTTAANNDDAARRDDSS
The eecchhoo utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as
amended by Cor. 1-2002.
BSD April 12, 2003 BSD
###Markdown
pwd: `pwd` prints your current working directory (i.e., the folder you are in within the terminal)
###Code
%%bash
pwd
###Output
/Users/tsalo/Documents/nbc/Onboarding/coding-exercises
###Markdown
cd: `cd` changes your directory. In bash, there are some helpful characters to know that you'll commonly use with `cd`: - `~`: your home directory - `.`: the directory you're in - `..`: the parent directory (one up from whatever you're referencing). Let's test those out.
###Code
%%bash
pwd
# back out one directory
cd ..
pwd
# don't go anywhere
cd .
pwd
# go to your home directory
cd ~
pwd
# You should also know that there is a difference between
# relative paths (paths to folders from where you are)
# and absolute paths (paths to folders from the
# computer's root directory)
cd /Users/tsalo/Documents
pwd
# As long as you are in the above directory, these two
# commands are equivalent
cd notebooks/
pwd
cd /Users/tsalo/Documents/notebooks/
pwd
# But see what happens when you try to cd into a folder
# you're already in
cd notebooks/
###Output
/Users/tsalo/Documents/nbc/Onboarding/coding-exercises
/Users/tsalo/Documents/nbc/Onboarding
/Users/tsalo/Documents/nbc/Onboarding
/Users/tsalo
/Users/tsalo/Documents
/Users/tsalo/Documents/notebooks
/Users/tsalo/Documents/notebooks
###Markdown
ls: `ls` lists the contents of a folder. There are a lot of options associated with `ls`, although printing out the `man` page here would take up too much space, so we'll simply list a few of the more important ones. - `-l`: list files/folders in long form (i.e., with extra information about the size of the files/folders and their owners). - `-a`: list _all_ files/folders; by default, `ls` does not show "hidden" files/folders, which are ones with names that start with a period (e.g., `.bashrc`). - `-t`: list files/folders in order of when they were updated; by default, `ls` lists files/folders alphabetically. - `-r`: reverse the order in which files/folders are listed. For example, in conjunction with `-t`, you can list files/folders so that the most recent appear at the bottom, rather than the top.
###Code
%%bash
echo Here is the standard ls:
ls
echo
echo Here is ls -l:
ls -l
echo
echo Check out the more complicated ls -ltra:
ls -ltra
###Output
Here is the standard ls:
intro_coding.ipynb
mri_data.ipynb
python.ipynb
working_with_spreadsheets.ipynb
Here is ls -l:
total 392
-rw-r--r-- 1 tsalo AD\Domain Users 10508 Apr 24 11:17 intro_coding.ipynb
-rw-r--r-- 1 tsalo AD\Domain Users 166771 Apr 23 08:31 mri_data.ipynb
-rw-r--r-- 1 tsalo AD\Domain Users 13346 Apr 23 09:28 python.ipynb
-rw-r--r-- 1 tsalo AD\Domain Users 1057 Apr 21 18:37 working_with_spreadsheets.ipynb
Check out the more complicated ls -ltra:
total 392
-rw-r--r-- 1 tsalo AD\Domain Users 1057 Apr 21 18:37 working_with_spreadsheets.ipynb
drwxr-xr-x@ 7 tsalo AD\Domain Users 238 Apr 21 18:51 ..
-rw-r--r-- 1 tsalo AD\Domain Users 166771 Apr 23 08:31 mri_data.ipynb
-rw-r--r-- 1 tsalo AD\Domain Users 13346 Apr 23 09:28 python.ipynb
-rw-r--r-- 1 tsalo AD\Domain Users 10508 Apr 24 11:17 intro_coding.ipynb
drwxr-xr-x 6 tsalo AD\Domain Users 204 Apr 24 11:17 .ipynb_checkpoints
drwxr-xr-x 7 tsalo AD\Domain Users 238 Apr 24 11:17 .
|
exam/2019-exam-with-answers.ipynb | ###Markdown
Exam pandas, with answers InstructionsFor this exam we use the dataset we explored in class about the given names of French babies over the period 1900-2019. The documentation about this dataset is online at https://www.insee.fr/fr/statistiques/2540004 and the dataset is downloaded at `../data/prenoms-fr-1900-2019.csv.zip`.For your convenience, this notebook is partially populated with code for loading and cleaning the dataset. A sample of the dataset is also displayed: **you have to focus on answering the questions.** Download the dataset
###Code
import requests
import os
def download(url, path):
"""Download file at url and save it locally at path"""
with requests.get(url, stream=True) as resp:
mode, data = 'wb', resp.content
if 'text/plain' in resp.headers['Content-Type']:
mode, data = 'wt', resp.text
with open(path, mode) as f:
f.write(data)
# Download the dataset if necessary
path = os.path.join('..', 'data', 'prenoms-fr-1900-2019.zip')
if not os.path.isfile(path):
os.makedirs(os.path.join('..', 'data'), exist_ok=True)
url = 'https://www.insee.fr/fr/statistiques/fichier/2540004/dpt2019_csv.zip'
download(url, path)
###Output
_____no_output_____
###Markdown
What you are expected to do: Execute the cells of the notebook which are already populated, one by one, before you start answering the questions. For answering each question you are provided one (or more) cells already prepared for you **to add your own code**. You will find some variables that you need to initialize. ⚠️ **ATTENTION** ⚠️ **ATTENTION** ⚠️ **ATTENTION** ⚠️ When you are done answering your questions, please download the notebook file (extension `.ipynb`) to your personal computer and **send that file by e-mail to the address written on the whiteboard**. It is that notebook that we will use to give a score for your work. ---------------- Load the dataset
###Code
import pandas as pd
# Load the data. Its fields are separated by ';'.
# We ask pandas to interpret the columns 'annais' and 'dpt' as strings to avoid error with missing
# values
df = pd.read_csv(path, sep=';', dtype={'annais':str, 'dpt':str})
rows, cols = df.shape
print(f'This dataset contains {rows:,} rows and {cols} columns')
###Output
_____no_output_____
###Markdown
Clean the dataset
###Code
# Rename some columns to use more meaningful names
df = df.rename(columns={
'sexe': 'sex',
'preusuel': 'name',
'annais': 'year',
'dpt': 'department',
'nombre': 'count'})
# Drop rows with missing department and year and special '_PRENOMS_RARES'
df.drop(df[df['department'] == 'XX'].index, inplace=True)
df.drop(df[df['year'] == 'XXXX'].index, inplace=True)
df.drop(df[df['name'] == '_PRENOMS_RARES'].index, inplace=True)
# Convert column 'year' to numeric values
df['year'] = pd.to_numeric(df['year'])
###Output
_____no_output_____
###Markdown
Display a sample
###Code
df.sample(8)
###Output
_____no_output_____
###Markdown
Subset the dataframe for convenience
###Code
# In this dataset, the sex is represented as 1 for males and 2 for females
# For convenience, create two views of the dataframe: one for boys and one for girls
is_boy = df['sex'] == 1
is_girl = df['sex'] == 2
boys, girls = df[is_boy], df[is_girl]
boys.head(8)
girls.sample(8)
###Output
_____no_output_____
###Markdown
Questions 1 & 2:**1)** Determine the year when the largest number of girls named `'MARIE'` were born. How many girls were named `'MARIE'` that particular year?
###Code
# Group the 'MARIE' per year and for each group (i.e. each year) sum the column 'count' for all the departments
is_marie = girls['name'] == "MARIE"
maries_per_year = girls[is_marie].groupby(['year'])['count'].sum()
year = maries_per_year.idxmax()
count_maries = maries_per_year.max()
print(f'The year with largest number of girls named MARIE was {year}: there were {count_maries:,} of them')
###Output
_____no_output_____
###Markdown
**2)** What **percentage** of all the girls born that year were named `'MARIE'`?
###Code
# Count the total number of girls born in the year computed in the previous question
total_girls = girls[girls['year'] == year]['count'].sum()
# Compute the fraction of MARIEs over the total number of girls born that year
percent_maries = (count_maries * 100) / total_girls
print(f'{percent_maries:.0f}% of the girls born in {year} were named MARIE')
###Output
_____no_output_____
###Markdown
Questions 3 & 4:**3)** Determine the most popular name for boys and for girls for the whole period included in the dataset.
###Code
# Group the boys by name and sum the value of the column 'count' for all values of
# the column 'year' and 'department'. Then get the index of the maximum resulting value
# of that sum
top_boys = boys.groupby(['name'])['count'].sum().idxmax()
# Idem for girls
top_girls = girls.groupby(['name'])['count'].sum().idxmax()
print(f'The most popular names over the period 1900-2018 are {top_girls} and {top_boys}')
###Output
_____no_output_____
###Markdown
**4)** Determine the top most popular name for the girls who in 2019 are aged 20 years or less
###Code
# Girls aged 20 years or less in 2019 were born in 1999 or later
girls_20y_or_less = girls[girls['year'] >= 1999]
# Same method used in the previous question
top_girl_up_to_20years = girls_20y_or_less.groupby(['name'])['count'].sum().idxmax()
print(f'The most popular name for girls aged 20 years or less in 2019 is {top_girl_up_to_20years}')
###Output
_____no_output_____
###Markdown
Question 5:Answer `True` or `False` to the question below:*Among the girls born in 1970, were there more named `"ISABELLE"` than `"BRIGITTE"` ?*
###Code
# Select the girls born in 1970
girls_1970 = girls[girls['year'] == 1970]
# Group those girls by their given name, and for each group sum the values of the column 'count'
girls_per_name_1970 = girls_1970.groupby(['name'])['count'].sum()
# Select the rows for ISABELLE and BRIGITTE
isabelles_1970 = girls_per_name_1970.loc['ISABELLE']
brigittes_1970 = girls_per_name_1970.loc['BRIGITTE']
print(f'{isabelles_1970 > brigittes_1970}: in 1970 {isabelles_1970:,} girls were named ISABELLE and {brigittes_1970:,} were named BRIGITTE')
###Output
_____no_output_____ |
ex3/Multiclass Classification and Neural Networks.ipynb | ###Markdown
Multi-class Classification and Neural NetworksCopyright 2018 Wes BarnettLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.io as sio
from scipy.optimize import minimize
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 Multi-class classification
###Code
df = sio.loadmat('ex3data1.mat')
X = df['X']
y = df['y']
fig, ax = plt.subplots(10, 10, figsize=(15, 15))
for i, a in enumerate(ax.ravel()):
j = np.random.randint(0,X.shape[0])
a.tick_params(axis='both', which='both', bottom=False, top=False, labelbottom=False, right=False,
left=False, labelleft=False)
a.matshow(np.transpose(X[j,:].reshape(20,20)), cmap=plt.cm.Greys)
plt.tight_layout()
np.unique(y)
# Was originally designed for octave, such that 0 was labelled as 10
y[y == 10] = 0
np.unique(y)
###Output
_____no_output_____
###Markdown
$J(\theta) = \frac{1}{m}\sum_{i=1}^{m} \left [ -y^{(i)}\log{(h_{\theta}(x^{(i)}))} - (1 - y^{(i)}) \log{(1 - h_{\theta}(x^{(i)}))} \right ] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_{j}^{2}$ $\frac{\partial J(\theta)}{\partial \theta_{j}} = \frac{1}{m} \sum^{m}_{i=1} \left ( h_{\theta}(x^{(i)}) - y^{(i)} \right ) x_{j}^{(i)} + \frac{\lambda}{m}\theta_{j}$ (the regularization terms are omitted for $j = 0$, as in the code below). Vectorized version, with $g$ the sigmoid applied elementwise: $\nabla J(\theta) = \frac{1}{m} X^{T} \left ( g(X\theta) - y \right ) + \frac{\lambda}{m}\theta$
###Code
def sigmoid(z):
return 1. / (1. + np.exp(-z))
def lrCostFunction(theta, X, y, l):
h = sigmoid(np.matmul(X,theta))
J = ( np.matmul(-np.transpose(y) , np.log(h)) - np.matmul(np.transpose(1-y), np.log(1-h)) + sum(l*0.5*theta[1:]**2) ) / len(y)
return J.ravel()
def lrGradFunction(theta, X, y, l):
h = sigmoid(np.matmul(X,theta)).reshape(-1,1)
grad = np.matmul(np.transpose(X), (h -y))
grad[1:] += l*theta[1:].reshape(-1,1)
grad /= len(y)
return grad.ravel()
# Test case
# Expected cost 2.534819
theta_t = np.array([-2., -1., 1., 2.]).reshape(-1,1)
X_t = np.hstack( (np.ones((5,1)), np.linspace(0.1,1.5,15).reshape(3,5).transpose()) )
y_t = np.array([1,0,1,0,1]).reshape(-1,1)
l_t = 3.
lrCostFunction(theta_t, X_t, y_t, l_t)
# Expected gradient: 0.146561 -0.548558 0.724722 1.398003
lrGradFunction(theta_t, X_t, y_t, l_t)
minimize_result = minimize(lrCostFunction, theta_t, method="CG", jac=lrGradFunction,
args=(X_t, (y_t == 9).astype(int), l_t), options={"maxiter": 400})
minimize_result.x
def oneVsAll(X, y, num_labels, l):
X = np.hstack((np.ones((len(y),1)), X))
all_theta = np.zeros((num_labels, X.shape[1]))
for k in range(num_labels):
initial_theta = np.zeros((X.shape[1],1))
minimize_result = minimize(lrCostFunction, initial_theta, method="CG", jac=lrGradFunction,
args=(X, (y == k).astype(int), l), options={"maxiter": 500})
all_theta[k,:] = minimize_result.x
return all_theta
l = 0.1
all_theta = oneVsAll(X, y, 10, l)
def predictOneVsAll(all_theta, X):
X = np.hstack((np.ones((len(y),1)), X))
pred = np.zeros(X.shape[0]).reshape(-1,1)
for i in range(X.shape[0]):
pred[i] = np.argmax( sigmoid( np.matmul( X[i,:], np.transpose(all_theta)) ) )
return pred
pred = predictOneVsAll(all_theta, X)
print("Training accuracy: {0:.3f}".format(np.mean(pred == y)))
###Output
Training accuracy: 0.965
###Markdown
2 Multiclass with Scikit-learn
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import GridSearchCV
df = sio.loadmat('ex3data1.mat')
X = df['X']
y = pd.get_dummies(df['y'].ravel())
y[0] = y[10]
y.drop(10, axis=1, inplace=True)
y.head()
# TODO: adjust regularization via grid search
lr = OneVsRestClassifier(LogisticRegression(C=100, tol=1e-6))
lr.fit(X, y)
print("Training accuracy: {0:.3f}".format(lr.score(X,y)))
###Output
Training accuracy: 0.925
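###Markdown
The TODO in the cell above suggests tuning the regularization strength with a grid search. A minimal sketch using the already-imported `GridSearchCV` is shown below; the grid of `C` values and the 3-fold cross-validation are illustrative choices, not part of the original exercise.
###Code
# Illustrative sketch for the TODO above: tune C with a small grid search.
param_grid = {'estimator__C': [0.1, 1.0, 10.0, 100.0]}
grid = GridSearchCV(OneVsRestClassifier(LogisticRegression(tol=1e-6, max_iter=1000)),
                    param_grid, cv=3)
grid.fit(X, y)
print("Best C: {0}, CV accuracy: {1:.3f}".format(grid.best_params_['estimator__C'], grid.best_score_))
###Output
_____no_output_____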
###Markdown
3 Neural networks (Previously trained model)
###Code
df = sio.loadmat('ex3weights.mat')
Theta1 = df['Theta1']
Theta2 = df['Theta2']
Theta1.shape
Theta2.shape
input_layer_size = 400
hidden_layer_size = 25
num_labels = 10
df = sio.loadmat('ex3data1.mat')
X = df['X']
y = df['y']
# Was originally designed for octave, such that 0 was labelled as 10
m = len(y)
###Output
_____no_output_____
###Markdown
3.1 Feedforward propagation and prediction. We have the trained weights from a pre-trained model. Input layer: $a^{(1)} = x$. Hidden layer: $z^{(2)} = \Theta^{(1)}a^{(1)}$, $a^{(2)} = g(z^{(2)})$. Output layer: $z^{(3)} = \Theta^{(2)}a^{(2)}$, $a^{(3)} = g(z^{(3)}) = h_{\theta}(x)$. (A bias unit of 1 is prepended to $a^{(1)}$ and $a^{(2)}$ before applying the weights, as done in the code below.)
###Code
def predict(Theta1, Theta2, X):
m = X.shape[0]
p = np.zeros((m, 1))
# Input layer
a1_0 = np.ones((m,1))
a1 = np.hstack((a1_0, X))
# Hidden layer
z2 = np.matmul(Theta1, a1.transpose())
a2_0 = np.ones((m,1))
a2 = np.hstack((a2_0, sigmoid(z2).transpose()))
# Output layer
z3 = np.matmul(Theta2, a2.transpose())
a3 = sigmoid(z3)
for i in range(m):
# Model was trained when labels had images with 0 labeled as 10
p[i] = np.argmax(a3[:,i])+1
return p
pred = predict(Theta1, Theta2, X)
print("Training set accuracy: {0:.3f}".format(np.mean(pred == y)))
###Output
Training set accuracy: 0.975
|
sdk/Faropt basic example notebook.ipynb | ###Markdown
FarOpt Basic Example Notebook. This notebook shows how you can work with the FarOpt SDK. Full documentation can be found at https://faropt.readthedocs.io/
###Code
import sys
!{sys.executable} -m pip install --upgrade faropt
# !rm -rf /home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.6/site-packages/faropt
###Output
_____no_output_____
###Markdown
Example OR tools script for vehicle routing:
###Code
!mkdir src
%%writefile src/main.py
"""Capacited Vehicles Routing Problem (CVRP)."""
# [START import]
from __future__ import print_function
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
# Use this to publish custom metrics!
from utils import *
# [END import]
# [START data_model]
def create_data_model():
"""Stores the data for the problem."""
data = {}
data['distance_matrix'] = [
[
0, 548, 776, 696, 582, 274, 502, 194, 308, 194, 536, 502, 388, 354,
468, 776, 662
],
[
548, 0, 684, 308, 194, 502, 730, 354, 696, 742, 1084, 594, 480, 674,
1016, 868, 1210
],
[
776, 684, 0, 992, 878, 502, 274, 810, 468, 742, 400, 1278, 1164,
1130, 788, 1552, 754
],
[
696, 308, 992, 0, 114, 650, 878, 502, 844, 890, 1232, 514, 628, 822,
1164, 560, 1358
],
[
582, 194, 878, 114, 0, 536, 764, 388, 730, 776, 1118, 400, 514, 708,
1050, 674, 1244
],
[
274, 502, 502, 650, 536, 0, 228, 308, 194, 240, 582, 776, 662, 628,
514, 1050, 708
],
[
502, 730, 274, 878, 764, 228, 0, 536, 194, 468, 354, 1004, 890, 856,
514, 1278, 480
],
[
194, 354, 810, 502, 388, 308, 536, 0, 342, 388, 730, 468, 354, 320,
662, 742, 856
],
[
308, 696, 468, 844, 730, 194, 194, 342, 0, 274, 388, 810, 696, 662,
320, 1084, 514
],
[
194, 742, 742, 890, 776, 240, 468, 388, 274, 0, 342, 536, 422, 388,
274, 810, 468
],
[
536, 1084, 400, 1232, 1118, 582, 354, 730, 388, 342, 0, 878, 764,
730, 388, 1152, 354
],
[
502, 594, 1278, 514, 400, 776, 1004, 468, 810, 536, 878, 0, 114,
308, 650, 274, 844
],
[
388, 480, 1164, 628, 514, 662, 890, 354, 696, 422, 764, 114, 0, 194,
536, 388, 730
],
[
354, 674, 1130, 822, 708, 628, 856, 320, 662, 388, 730, 308, 194, 0,
342, 422, 536
],
[
468, 1016, 788, 1164, 1050, 514, 514, 662, 320, 274, 388, 650, 536,
342, 0, 764, 194
],
[
776, 868, 1552, 560, 674, 1050, 1278, 742, 1084, 810, 1152, 274,
388, 422, 764, 0, 798
],
[
662, 1210, 754, 1358, 1244, 708, 480, 856, 514, 468, 354, 844, 730,
536, 194, 798, 0
],
]
# [START demands_capacities]
data['demands'] = [0, 1, 1, 2, 4, 2, 4, 8, 8, 1, 2, 1, 2, 4, 4, 8, 8]
data['vehicle_capacities'] = [15, 15, 15, 15]
# [END demands_capacities]
data['num_vehicles'] = 4
data['depot'] = 0
return data
# [END data_model]
# [START solution_printer]
def print_solution(data, manager, routing, solution):
"""Prints solution on console."""
total_distance = 0
total_load = 0
for vehicle_id in range(data['num_vehicles']):
index = routing.Start(vehicle_id)
plan_output = 'Route for vehicle {}:\n'.format(vehicle_id)
route_distance = 0
route_load = 0
while not routing.IsEnd(index):
node_index = manager.IndexToNode(index)
route_load += data['demands'][node_index]
plan_output += ' {0} Load({1}) -> '.format(node_index, route_load)
previous_index = index
index = solution.Value(routing.NextVar(index))
route_distance += routing.GetArcCostForVehicle(
previous_index, index, vehicle_id)
plan_output += ' {0} Load({1})\n'.format(manager.IndexToNode(index),
route_load)
plan_output += 'Distance of the route: {}m\n'.format(route_distance)
plan_output += 'Load of the route: {}\n'.format(route_load)
print(plan_output)
total_distance += route_distance
total_load += route_load
print('Total distance of all routes: {}m'.format(total_distance))
log_metric('total_distance',total_distance)
save('/tmp/main.py') # or some other saved output
print('Total load of all routes: {}'.format(total_load))
# [END solution_printer]
def main():
"""Solve the CVRP problem."""
# Instantiate the data problem.
# [START data]
data = create_data_model()
# [END data]
# Create the routing index manager.
# [START index_manager]
manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
data['num_vehicles'], data['depot'])
# [END index_manager]
# Create Routing Model.
# [START routing_model]
routing = pywrapcp.RoutingModel(manager)
# [END routing_model]
# Create and register a transit callback.
# [START transit_callback]
def distance_callback(from_index, to_index):
"""Returns the distance between the two nodes."""
# Convert from routing variable Index to distance matrix NodeIndex.
from_node = manager.IndexToNode(from_index)
to_node = manager.IndexToNode(to_index)
return data['distance_matrix'][from_node][to_node]
transit_callback_index = routing.RegisterTransitCallback(distance_callback)
# [END transit_callback]
# Define cost of each arc.
# [START arc_cost]
routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
# [END arc_cost]
# Add Capacity constraint.
# [START capacity_constraint]
def demand_callback(from_index):
"""Returns the demand of the node."""
# Convert from routing variable Index to demands NodeIndex.
from_node = manager.IndexToNode(from_index)
return data['demands'][from_node]
demand_callback_index = routing.RegisterUnaryTransitCallback(
demand_callback)
routing.AddDimensionWithVehicleCapacity(
demand_callback_index,
0, # null capacity slack
data['vehicle_capacities'], # vehicle maximum capacities
True, # start cumul to zero
'Capacity')
# [END capacity_constraint]
# Setting first solution heuristic.
# [START parameters]
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
# [END parameters]
# Solve the problem.
# [START solve]
solution = routing.SolveWithParameters(search_parameters)
# [END solve]
# Print solution on console.
# [START print_solution]
print('printing solutions')
if solution:
print_solution(data, manager, routing, solution)
# [END print_solution]
main()
###Output
Overwriting src/main.py
###Markdown
Additional note on utils included in the back end: notice this line in the `print_solution()` function above: ```log_metric('total_distance',total_distance)``` This automatically pushes the float/int value as a named metric to CloudWatch Logs, which you can view after the job completes or while it is running. Import and initialize a FarOpt object
###Code
from faropt import FarOpt
fo = FarOpt()
###Output
INFO:root:FarOpt backend is ready!
INFO:root:Async Bucket: faropt-s3asyncd0fda937-jk815bus3ob5
INFO:root:Bucket: faropt-s3bucketfbfa637e-jauzfgds5cug
INFO:root:Recipe Table: FaroptRecipeTable
INFO:root:Job table: FaroptJobTable
INFO:root:Lambda Opt function: faropt-lambdafunction2783057D4-4NK8UXFZPQBP
###Markdown
.... _you should see "INFO:root:FarOpt backend is ready!" if the back end is set up correctly_ Solve Vehicle routing using FarOpt Configure and submit the job _This packages the source code (main.py and any subdirectories and files)_
###Code
fo.configure('./src/')
###Output
INFO:root:Listing project files ...
INFO:root:Configured job!
###Markdown
_....this can be a project that includes multiple folders and files, but requires a main.py at the root level_ submit() Submits to Fargate, and is suitable for arbitrarily long running jobs
###Code
fo.submit()
###Output
INFO:root:Submitting job
INFO:root:Submitted job! id: 2020-12-09-18-57-01-ced83a10-1ab9-4dc6-a4cc-6eb3f4fc1941
###Markdown
You can check just the primary status of the job
###Code
fo.primary_status()
###Output
_____no_output_____
###Markdown
.. or wait for job to complete..._look for INFO:root:JOB COMPLETED!_
###Code
fo.wait()
###Output
PROVISIONING
PROVISIONING
PROVISIONING
PROVISIONING
PENDING
PENDING
PENDING
PENDING
PENDING
PENDING
PENDING
RUNNING
RUNNING
RUNNING
RUNNING
###Markdown
_You should see ... INFO:root:JOB COMPLETED!_ View detailed logs from the job you ran
###Code
fo.logs()
###Output
INFO:root:No running tasks. Checking completed tasks...
INFO:root:No running tasks. Checking completed tasks...
###Markdown
As you can see above, any output files that you save to /tmp will be automatically uploaded to the S3 bucket: ```Saving /tmp/output to s3:///path/``` Optionally add this problem as a standard recipe. _This is useful when you need to repeatedly run the same problem, perhaps with different data inputs_
###Code
fo.add_recipe(recipe_name='cvrp_problem_v126',maintainer='Lab126')
###Output
_____no_output_____
###Markdown
Each recipe is given a unique ID, and this can be used to run the recipe at any time
###Code
fo.get_recipe_id_from_description('cvrp_problem_v126')
###Output
_____no_output_____
###Markdown
Rerun this recipe at any time
###Code
fo.run_recipe(fo.get_recipe_id_from_description('cvrp_problem_v126'))
###Output
INFO:root:Downloading recipe...
INFO:root:Configured job!
INFO:root:Submitting job
INFO:root:Submitted job! id: 2020-12-09-19-18-58-3c4731c0-3dad-45ec-acdb-eae51c711ade
###Markdown
... Or directly run using the recipe ID
###Code
fo.run_recipe('e4191eda-b16e-4cea-80d1-abd5f80f75ad')
###Output
INFO:root:Downloading recipe...
INFO:root:Configured job!
INFO:root:Submitting job
INFO:root:Submitted job! id: 2020-12-09-19-18-28-2dd6801e-52ef-43aa-986b-1faede889d20
###Markdown
You can also run this job as a micro job (on AWS Lambda). Note that the libraries you can use are limited to the ortools, pyomo and deap libraries with default solvers.
###Code
fo.submit(micro=True)
###Output
INFO:root:Submitting job
INFO:root:Staging job
INFO:root:Staged job! id: 2020-12-09-19-19-47-ef63ad97-c34d-4c25-a2d5-cebad5881e7f
INFO:root:Look for s3://faropt-s3asyncd0fda937-jk815bus3ob5/staged/2020-12-09-19-19-47-ef63ad97-c34d-4c25-a2d5-cebad5881e7f/source.zip
INFO:root:By submitting a micro job, you are restricted to using ortools, pyomo and deap libraries for jobs that last up to 5 minutes
###Markdown
_...or copy the recipe ID into the run_recipe call_ List past jobs and recipes
###Code
fo.list_jobs()
fo.list_recipes()
###Output
recipeid:e4191eda-b16e-4cea-80d1-abd5f80f75ad | bucket:faropt-s3bucketfbfa637e-jauzfgds5cug | path:2020-12-09-18-57-01-ced83a10-1ab9-4dc6-a4cc-6eb3f4fc1941/source.zip | description:cvrp_problem_v126 | maintainer:Lab126
###Markdown
Run a project directly from S3 (requires a source.zip in S3, with the main.py at the root)
###Code
fo.run_s3_job(bucket='faropt-s3bucketfbfa637e-jauzfgds5cug',key='2020-12-09-18-57-01-ced83a10-1ab9-4dc6-a4cc-6eb3f4fc1941/source.zip')
###Output
INFO:root:Downloading source...
INFO:root:Configured job!
INFO:root:Submitting job
INFO:root:Submitted job! id: 2020-12-09-19-20-34-272c60c3-cfd9-4c7b-b657-868dfb5ae529
###Markdown
_You can also check Fargate service for running tasks!_
###Code
fo.wait()
fo.logs()
###Output
INFO:root:No running tasks. Checking completed tasks...
INFO:root:No running tasks. Checking completed tasks...
###Markdown
Getting metrics. Using utils.py, we logged a metric called "total_distance" while running the job. Let's get the published values for this metric for this job.
###Code
fo.get_metric_data(metric_name='total_distance')
###Output
_____no_output_____ |
activity3-2.ipynb | ###Markdown
Notebook by **Maxime Dion** For the QSciTech-QuantumBC virtual workshop on gate-based quantum computing. Tutorial for Activity 3.2. For this activity, make sure you can easily import your versions of `hamiltonian.py`, `pauli_string.py` and `mapping.py` that you completed in the Activity 3.1 tutorial. You will also need your versions of `evaluator.py` and `solver.py`. Placing this notebook in the same `path` as these files is the easiest way to achieve this. At the end of this notebook, you should be in a good position to complete these 2 additional files. The solution we suggest here is NOT mandatory. If you find ways to make it better and more efficient, go on and impress us! On the other hand, by completing all sections of this notebook you'll be able to: - Prepare a quantum state based on a variational form (circuit); - Measure qubits in the X, Y and Z basis; - Estimate the expectation value of a Pauli string on a quantum state; - Evaluate the expectation value of a Hamiltonian in the form of a Linear Combinaison of Pauli Strings; - Run a minimization algorithm on the energy expectation function to find the ground state of a Hamiltonian; - Dance to express your overwhelming sense of accomplishment. **Important** When you modify and save a `*.py` file you need to re-import it so that your modifications can be taken into account when you re-execute a cell. By adding the magic command `%autoreload` at the beginning of a cell, you make sure that the modifications you made to the `*.py` files are taken into account when you re-run a cell and that you can see their effect. If you encounter unusual results, restart the kernel and try again. **Note on numbering** When you ask a question in the Slack channel you can refer to the section name or the section number. To enable the section numbering, please make sure you install [nbextensions](https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/install.html). It is available in the conda distribution. After installing it you need to enable the option 'Table of contents (2)'. Variational Quantum States. Every quantum circuit starts with all qubits in the state $|0\rangle$. In order to prepare a quantum state $|\psi\rangle$ we need to prepare a `QuantumCircuit` that will modify the states of the qubits in order to get this specific state. The action of a circuit can always be represented as a unitary operator.\begin{align} |\psi\rangle &= \hat{U} |0 \ldots 0\rangle\end{align}For a parametric state, the `QuantumCircuit`, and therefore the unitary $\hat{U}$, will depend on some parameters that we write as $\boldsymbol{\theta}$.\begin{align} |\psi(\boldsymbol{\theta})\rangle &= \hat{U}(\boldsymbol{\theta}) |0 \ldots 0\rangle\end{align}We will see 2 ways to define parametrized quantum circuits that represent variational quantum states. For the first method we only need the `QuantumCircuit` class from `qiskit.circuit`.
###Code
from qiskit.circuit import QuantumCircuit
###Output
_____no_output_____
###Markdown
Generating function. The easiest way to generate a parametrized `QuantumCircuit` is to implement a function that takes parameters as arguments and returns a `QuantumCircuit`. Here is such a function that generates a 2-qubit `QuantumCircuit`.
###Code
def example_2qubits_2params_quantum_circuit(theta,phi):
qc = QuantumCircuit(2)
qc.ry(theta,0)
qc.rz(phi,0)
qc.cx(0,1)
return qc
###Output
_____no_output_____
###Markdown
To visualize this circuit we first need to call the generating function with dummy argument values for it to return a circuit. We can draw the circuit. The `'mpl'` option draws the circuit in a fancy way using `matplotlib`. If you are experiencing problems, you can remove this option.
###Code
varform_qc = example_2qubits_2params_quantum_circuit
qc = varform_qc(1,2)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
Using the qiskit `Parameter` class. The other way to generate a parametrized `QuantumCircuit` is to use the `Parameter` class in `qiskit`.
###Code
from qiskit.circuit import Parameter
###Output
_____no_output_____
###Markdown
Here is the same circuit as before done with this method.
###Code
a = Parameter('a')
b = Parameter('b')
varform_qc = QuantumCircuit(2)
varform_qc.ry(a,0)
varform_qc.rz(b,0)
varform_qc.cx(0,1)
###Output
_____no_output_____
###Markdown
Done this way the parametrized circuit can be drawn right away.
###Code
varform_qc.draw('mpl')
###Output
_____no_output_____
###Markdown
To see what the parameters of a parametrized `QuantumCircuit` are, you can use
###Code
varform_qc.parameters
###Output
_____no_output_____
###Markdown
**Important** Beware that sometimes the parameters will not appear in the same order as you declared them! To assign values to the different parameters we need to use the `QuantumCircuit.assign_parameters()` method. This method takes a `dict` as an argument containing the `Parameter`s and their `value`s.
###Code
param_dict = {a : 1, b : 2}
qc = varform_qc.assign_parameters(param_dict)
qc.draw('mpl')
param_dict = {a : 3, b : 4}
qc = varform_qc.assign_parameters(param_dict)
qc.draw('mpl')
###Output
_____no_output_____
###Markdown
If you want to provide the parameter values as a `list` or a `np.array` you can build the `dict` directly. Just make sure that the order you use in `param_values` corresponds to the order of `varform_qc.parameters`.
###Code
param_values = [1, 2]
param_dict = dict(zip(varform_qc.parameters,param_values))
print(param_dict)
###Output
{Parameter(a): 1, Parameter(b): 2}
###Markdown
Varform circuits for H2. Using the method of your choice, prepare 2 different 4-qubit `QuantumCircuit`s. - The first should take 1 parameter to cover the real-coefficient state subspace spanned by $|0101\rangle$ and $|1010\rangle$. - The second should take 3 parameters to cover the real-coefficient state subspace spanned by $|0101\rangle$, $|0110\rangle$, $|1001\rangle$ and $|1010\rangle$. Revisit the presentation to find such circuits.
###Code
varform_4qubits_1param = QuantumCircuit(4)
a = Parameter('a')
"""
Your code here
"""
varform_4qubits_1param.ry(a,1)
varform_4qubits_1param.x(0)
varform_4qubits_1param.cx(1,0)
varform_4qubits_1param.cx(0,2)
varform_4qubits_1param.cx(1,3)
varform_4qubits_1param.draw('mpl')
varform_4qubits_3params = QuantumCircuit(4)
a = Parameter('a')
b = Parameter('b')
c = Parameter('c')
"""
Your code here
"""
varform_4qubits_3params.x(0)
varform_4qubits_3params.x(2)
varform_4qubits_3params.barrier(range(4))
varform_4qubits_3params.ry(a,1)
varform_4qubits_3params.cx(1,3)
varform_4qubits_3params.ry(b,1)
varform_4qubits_3params.ry(c,3)
varform_4qubits_3params.cx(3,2)
varform_4qubits_3params.cx(1,0)
varform_4qubits_3params.draw('mpl')
###Output
_____no_output_____
###Markdown
Evaluator. The `Evaluator` is an object that will help us evaluate the expectation value of a quantum operator (`LCPS`) on a specific variational form and backend. To initialize an `Evaluator` you should provide: **Mandatory** - A **variational form** that can create a `QuantumCircuit` given a set of `params`; - A **backend** `qiskit.Backend` (a simulator or an actual device handle) on which to run the `QuantumCircuit`. **Optional** - `execute_opts`, a `dict` containing the optional arguments to pass to the `qiskit.execute` method (e.g. `{'shots' : 1024}`); - `measure_filter`, a `qiskit.ignis...MeasurementFilter` that can be applied to the result of a circuit execution to mitigate readout errors. The creation/usage of an `Evaluator` such as `BasicEvaluator` goes like this: `evaluator = BasicEvaluator(varform_qc, backend)`, then `evaluator.set_linear_combinaison_pauli_string(operator_lcps)`, and finally `expected_value = evaluator.eval(params)`. First you initialize the evaluator. Next, you provide the operator you want to evaluate using the `set_linear_combinaison_pauli_string(LCPS)` method. Finally, you call the `eval(params)` method that will return the estimation of the operator's expected value. Mathematically, the use of this method corresponds to \begin{align}E(\boldsymbol{\theta}).\end{align}We will now go through the different pieces necessary to complete the `Evaluator` class. Static methods. Being static, these methods do not need an instance of the class to be used. They can be called directly from the class. These methods are called before the first call to `eval(params)`. Most of these methods are implemented inside the abstract class `Evaluator` (except for `prepare_measurement_circuits_and_interpreters(LCPS)`). Pauli Based Measurements. We have seen that even if a quantum computer can only measure qubits in the Z basis, the X and Y bases are accessible if we *rotate* the quantum state before measuring. Implement the `@staticmethod` `pauli_string_based_measurement(PauliString)` in the `Evaluator` class in file `evaluator.py` so that it returns a `QuantumCircuit` that measures each qubit in the basis given by the `PauliString`. First we import the abstract class `Evaluator` and the `PauliString` class.
###Code
from evaluator import Evaluator
from pauli_string import PauliString
###Output
_____no_output_____
###Markdown
Test your code with the next cell.
###Code
%autoreload
pauli_string = PauliString.from_str('ZIXY')
measure_qc = Evaluator.pauli_string_based_measurement(pauli_string)
measure_qc.draw('mpl')
###Output
_____no_output_____
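###Markdown
For orientation, here is a minimal standalone sketch (not the graded `Evaluator` method) of the basis-change idea used above: an `H` gate maps the X basis onto the Z basis, and `S†` followed by `H` maps the Y basis onto the Z basis; the qubit-ordering convention below is an assumption.
###Code
from qiskit.circuit import QuantumCircuit

def basis_change_measurement_sketch(pauli_str):
    # pauli_str is a plain string like 'ZIXY'; the leftmost character is
    # assumed to act on the highest qubit (qiskit-style ordering).
    n = len(pauli_str)
    qc = QuantumCircuit(n, n)
    for qubit, pauli in enumerate(reversed(pauli_str)):
        if pauli == 'X':
            qc.h(qubit)        # rotate the X basis into the Z basis
        elif pauli == 'Y':
            qc.sdg(qubit)      # rotate the Y basis into the Z basis
            qc.h(qubit)
        qc.measure(qubit, qubit)
    return qc

basis_change_measurement_sketch('ZIXY').draw()
###Output
_____no_output_____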
###Markdown
Measurable eigenvalues. Implement the `@staticmethod` `measurable_eigenvalues(PauliString)` in the `Evaluator` class in file `evaluator.py` so that it returns a `np.array` containing the eigenvalue of the measurable `PauliString` for each basis state. We noted this vector\begin{align} \Lambda_q^{(\hat{\mathcal{P}})}.\end{align}Be mindful of the order of the basis states:\begin{align} 0000, 0001, 0010, \ldots, 1110, 1111 \end{align}You can test your implementation on the `ZIXY` Pauli string.
###Code
%autoreload
pauli_string = PauliString.from_str('ZIXY')
measurable_eigenvalues = Evaluator.measurable_eigenvalues(pauli_string)
print(measurable_eigenvalues)
###Output
[ 1 -1 -1 1 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1]
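###Markdown
As a cross-check, the same eigenvalue vector can be computed directly from bit parities: for a diagonalized Pauli string, the eigenvalue of basis state $|q\rangle$ is $(-1)$ raised to the parity of the bits of $q$ on the non-identity positions. Here is a minimal standalone sketch (the qubit-ordering convention is an assumption that reproduces the expected output below).
###Code
import numpy as np

def measurable_eigenvalues_sketch(diag_pauli_str):
    # The leftmost character is assumed to act on the highest qubit.
    n = len(diag_pauli_str)
    mask = np.array([c != 'I' for c in diag_pauli_str[::-1]], dtype=int)
    eigenvalues = np.zeros(2**n, dtype=int)
    for q in range(2**n):
        bits = np.array([(q >> k) & 1 for k in range(n)])
        eigenvalues[q] = (-1)**int(np.dot(bits, mask) % 2)
    return eigenvalues

print(measurable_eigenvalues_sketch('ZIZZ'))
###Output
_____no_output_____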
###Markdown
For the `PauliString` `'ZIXY'` (measurable `'ZIZZ'`) you should get the following eigenvalues :[ 1 -1 -1 1 1 -1 -1 1 -1 1 1 -1 -1 1 1 -1] Measurement Circuits and InterpretersThe `prepare_measurement_circuits_and_interpreters(LCPS)` is specific to the sub-type of `Evaluator`. The two different types of `Evaluator`s considered in this workshop are :- The `BasicEvaluator` will run a single `QuantumCircuit` for each `PauliString` present in the provided `LCPS`.- The `BitwiseCommutingCliqueEvaluator` will exploit Bitwise Commuting Clique to combine the evaluation of Commuting `PauliStrin`s and reduce the number of different `QuantumCircuit` run for each evaluation. Implement the `prepare_measurement_circuits_and_interpreters(LCPS)` method in the `BasicEvaluator` class in file `Evaluator.py`. This method should return 2 `list`. The first should contain one measurement `QuantumCircuit` for each `PauliString` in the `LCPS`. The second list should contain one `np.array` of the eigenvalues of the measurable `PauliString` for each basis state.**Note** You can try to implement similar methods for the `BitwiseCommutingCliqueEvaluator`.You can test your method on `2 ZIXY + 1 IXYZ`.
###Code
from evaluator import BasicEvaluator
%autoreload
lcps = 2*PauliString.from_str('ZIXY') + 1*PauliString.from_str('IXYZ')
measurement_circuits, interpreters = BasicEvaluator.prepare_measurement_circuits_and_interpreters(lcps)
###Output
_____no_output_____
###Markdown
You can visualize the interpreter and the measurement circuit for each term in the `LCPS` by using `i = 0` and `i = 1`.
###Code
i = 1
print(interpreters[i])
measurement_circuits[i].draw('mpl')
###Output
[ 1.+0.j -1.+0.j -1.+0.j 1.+0.j -1.+0.j 1.+0.j 1.+0.j -1.+0.j 1.+0.j
-1.+0.j -1.+0.j 1.+0.j -1.+0.j 1.+0.j 1.+0.j -1.+0.j]
###Markdown
The interpreters should be respectively :[ 2 -2 -2 2 2 -2 -2 2 -2 2 2 -2 -2 2 2 -2][ 1 -1 -1 1 -1 1 1 -1 1 -1 -1 1 -1 1 1 -1] Set the LCPSThe method `set_linear_combinaison_pauli_string(LCPS)` is already implemented inside the abstract class `Evaluator`. Please take a look at it to notice that this method makes an immediate call to the `prepare_measurement_circuits_and_interpreters(LCPS)` method you have just implemented. The `measurement_circuits` and `interpreters` are also stored in attributes of the same name. Methods called inside `eval(params)`Since we are entering the action of the `eval(params)` method we will need to instantiate an `Evaluator`. This will require a `backend`. We will use a local `qasm_simulator` for now, which is part of the `Aer` module. In the future, you can use a different `backend`. We will also soon need the `execute` method.
###Code
from qiskit import Aer, execute
qasm_simulator = Aer.get_backend('qasm_simulator')
###Output
_____no_output_____
###Markdown
Circuit preparation. The `prepare_eval_circuits(params)` method will combine the variational form with these measurement `QuantumCircuit`s to form the complete circuits to be run. This method has 2 tasks: - Assign the `params` to the variational form to get a `QuantumCircuit` that prepares the quantum state; - Combine this circuit with all the measurement circuits and return as many `QuantumCircuit`s inside a `list`. Implement this method inside the `Evaluator` class and test it here.
###Code
%autoreload
lcps = 2*PauliString.from_str('ZXZX') + 1*PauliString.from_str('IIZZ')
varform = varform_4qubits_1param
backend = qasm_simulator
evaluator = BasicEvaluator(varform,backend)
evaluator.set_linear_combinaison_pauli_string(lcps)
params = [0,]
eval_circuits = evaluator.prepare_eval_circuits(params)
###Output
_____no_output_____
###Markdown
You can take a look at the `QuantumCircuit` for the first (`i=0`) and second (`i=1`) PauliString. What you should get is a circuit that begins with the state preparation circuit with the `params` applied to it followed by the measurement circuit.
###Code
i = 0
eval_circuits[i].draw('mpl')
###Output
_____no_output_____
###Markdown
Execution. The ultimate goal of the execution of a circuit is to get the number of times each basis state is measured. Let's execute our `eval_circuits`. We can run many `QuantumCircuit`s at the same time by placing them into a `list`, which they already are!
###Code
execute_opts = {'shots' : 1024}
job = execute(eval_circuits, backend=qasm_simulator, **execute_opts)
result = job.result()
###Output
_____no_output_____
###Markdown
We can get the number of counts for each state from the execution of a given circuit with the following lines. The counts are returned as a `dict`.
###Code
i = 0
#i = 1
counts = result.get_counts(eval_circuits[i])
print(counts)
###Output
{'0000': 266, '0001': 251, '0100': 262, '0101': 245}
###Markdown
If you `eval_circuits` are correct, you should get for, `i = 0` and `i = 1` respectively, something like this (exact value may vary since there is some randomness in the executation of a quantum circuit){'0000': 266, '0001': 262, '0100': 240, '0101': 256}{'0101': 1024} counts2arrayWe will transform this `dict` into an array with the `counts2array` method. Implement this method that will return the vector $N_q$. Be mindful of the order of the basis state.\begin{align} 0000, 0001, 0010, \ldots, 1110, 1111 \end{align}**optional remark** While doing this will allow us to interpret the counts with a simple inner product, this implies creating an array of size $2^n$ where $n$ is the numbers of qubits. This might not be such a good idea for larger systems and the use of a `dict` might be more appropriate. Can you interpret the counts efficiently while keeping them in a `dict`?
###Code
%autoreload
i = 0
counts = result.get_counts(eval_circuits[i])
evaluator.counts2array(counts)
###Output
_____no_output_____
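###Markdown
For reference, here is a minimal standalone sketch (independent of the `Evaluator` class) of the same conversion, indexing the array by the integer value of each measured bitstring:
###Code
import numpy as np

def counts2array_sketch(counts, num_qubits):
    # counts is the dict returned by result.get_counts(...)
    array = np.zeros(2**num_qubits)
    for bitstring, count in counts.items():
        array[int(bitstring, base=2)] = count
    return array

print(counts2array_sketch(counts, 4))
###Output
_____no_output_____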
###Markdown
For `i=0` in particular you should get something similar to:array([228., 276., 0., 0., 269., 251., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) Interpret countsThe action of interpreting the counts is actually the task of estimating the expectation value of a `PauliString` and multiplying by the coefficient associated with this `PauliString` in the LCPS. Implemented the `interpret_count_array` method that should return an array with the values for this expression.\begin{align} h_i \langle \hat{\mathcal{P}}_i \rangle = \frac{h_i}{N_\text{tot}}\sum_{q} N_q \Lambda_q^{(\hat{\mathcal{P}}_i)}\end{align}
###Code
%autoreload
i = 1
counts_array = evaluator.counts2array(result.get_counts(eval_circuits[i]))
interpreter = evaluator.interpreters[i]
expected_value = evaluator.interpret_count_array(interpreter,counts_array)
print(expected_value)
###Output
(-1+0j)
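###Markdown
The value above is just the inner product written in the formula: the interpreter already contains the coefficient $h_i$ times the eigenvalues, so a quick sanity check using the `interpreter` and `counts_array` from the previous cell is shown below.
###Code
import numpy as np

# Sanity check of the formula: divide the inner product by the total number of shots.
value_check = np.dot(interpreter, counts_array) / np.sum(counts_array)
print(value_check)
###Output
_____no_output_____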
###Markdown
You should get something close to `0` for the first one and `-1` for the second. Evaluation. You now have all the pieces to complete the `eval(params)` method. This method should use all the methods you've implemented since the section *Methods called inside `eval(params)`* and then sum all the interpreted values. Mathematically, it should return the value of the expression\begin{align} E(\boldsymbol{\theta}) = \sum_i h_i \langle\psi(\boldsymbol{\theta}) | \hat{\mathcal{P}}_i | \psi(\boldsymbol{\theta}) \rangle.\end{align}
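For orientation, here is a minimal sketch of how `eval(params)` can chain the helper methods together; the attribute names `backend` and `execute_opts` are assumptions, and your own implementation may differ.
###Code
# Minimal sketch only -- not necessarily identical to your implementation.
# It assumes the evaluator stores its backend and execute options in
# attributes named `backend` and `execute_opts`.
def eval_sketch(evaluator, params):
    eval_circuits = evaluator.prepare_eval_circuits(params)
    job = execute(eval_circuits, backend=evaluator.backend, **evaluator.execute_opts)
    result = job.result()
    value = 0
    for circuit, interpreter in zip(eval_circuits, evaluator.interpreters):
        counts_array = evaluator.counts2array(result.get_counts(circuit))
        value += evaluator.interpret_count_array(interpreter, counts_array)
    return value.real
###Output
_____no_output_____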
###Code
%autoreload
lcps = 2*PauliString.from_str('ZXZX') + 1*PauliString.from_str('IIZZ')
varform = varform_4qubits_1param
backend = qasm_simulator
execute_opts = {'shots' : 1024}
evaluator = BasicEvaluator(varform, backend, execute_opts=execute_opts)
evaluator.set_linear_combinaison_pauli_string(lcps)
params = [0,]
expected_value = evaluator.eval(params)
print(expected_value)
###Output
-1.0703125
###Markdown
Yes that's right, your code now returns an estimate of the expression\begin{align} E(\theta) = \langle \psi(\theta) | \hat{\mathcal{H}} | \psi(\theta) \rangle.\end{align} for\begin{align} \hat{\mathcal{H}} = 2\times \hat{Z}\hat{X}\hat{Z}\hat{X} + 1\times \hat{I}\hat{I}\hat{Z}\hat{Z}\end{align} and the varform `varform_4qubits_1param` for $\theta = 0$. The `evaluator.eval(params)` is now a method you can call like a function and it will return the energy $E(\theta)$.Now comes the time to test this on the $\text{H}_2$ molecule Hamiltonian! The Hamiltonian evaluation testWe will now import the classes from the previous activity.
###Code
from hamiltonian import MolecularFermionicHamiltonian
from mapping import JordanWigner
###Output
_____no_output_____
###Markdown
For ease of use we will import the integral values instead of using `pyscf`. We also import the Coulomb repulsion energy for later use. By now we are experts in building the Hamiltonian.
###Code
with open('Integrals_sto-3g_H2_d_0.7350_no_spin.npz','rb') as f:
out = np.load(f)
h1_load_no_spin = out['h1']
h2_load_no_spin = out['h2']
energy_nuc = out['energy_nuc']
molecular_hamiltonian = MolecularFermionicHamiltonian.from_integrals(h1_load_no_spin,h2_load_no_spin).include_spin()
###Output
_____no_output_____
###Markdown
We use the Jordan-Wigner mapping to get the `LCPS` for the H2 molecule with `d=0.735`.
###Code
%autoreload
mapping = JordanWigner()
lcps_h2 = mapping.fermionic_hamiltonian_to_linear_combinaison_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
print(lcps_h2)
###Output
15 pauli strings for 4 qubits (Real, Imaginary)
IIII (-0.81055,+0.00000)
IIIZ (+0.17218,+0.00000)
IIZI (-0.22575,+0.00000)
IIZZ (+0.12091,+0.00000)
IZII (+0.17218,+0.00000)
IZIZ (+0.16893,+0.00000)
IZZI (+0.16615,+0.00000)
ZIII (-0.22575,+0.00000)
ZIIZ (+0.16615,+0.00000)
ZIZI (+0.17464,+0.00000)
ZZII (+0.12091,+0.00000)
XXXX (+0.04523,+0.00000)
XXYY (+0.04523,+0.00000)
YYXX (+0.04523,+0.00000)
YYYY (+0.04523,+0.00000)
###Markdown
We build an evaluator and feed it the `LCPS` of H2, and then we evaluate the energy. Use `params` such that your `varform` prepares the state $|0101\rangle$.
###Code
%autoreload
varform = varform_4qubits_1param
backend = qasm_simulator
execute_opts = {'shots' : 1024}
evaluator = BasicEvaluator(varform,backend,execute_opts = execute_opts)
evaluator.set_linear_combinaison_pauli_string(lcps_h2)
params = [0,]
expected_value = evaluator.eval(params)
print(expected_value)
###Output
-1.8370563365153778
###Markdown
If your `varform` prepares the state $|0101\rangle$, you should get something around `-1.83`. This energy is already close to the ground state energy because the ground state is close to $|0101\rangle$, but still it's not the ground state. We need to find the `params` that will minimise the energy.\begin{align} E_0 = \min_{\boldsymbol{\theta}} E(\boldsymbol{\theta})\end{align} SolverIn a final step we need to implement a solver that will try to find the minimal energy. We will implement 2 solvers. The second is optional.- First the one using the VQE algo in conjunction with a minimizer to try to minimize `evaluator.eval(params)`.- Next we will make use of the `to_matrix()` method you implemented in the previous activity to find the exact value/solution. VQE SolverLike any minimzation process this solver will need a couple of ingredients :- A function to minimize, we will provide this with the evaluator- A minimizer, an algorithm that generaly takes in a function and a set of starting parameters and returns the best guess for the optimal parameters that correspond to the minimal value of the function to minimize.- A set of starting parameters. MinimizerA minimizer that works OK for the VQE algorithme is the Sequential Least SQuares Programming (SLSQP) algorithm. It's available in the `minimize` sub-module of [scipy](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-slsqp.html).
###Code
from scipy.optimize import minimize
###Output
_____no_output_____
###Markdown
We will make a Lambda function with the minimizer so we can set all sorts of parameter before feeding it to the solver.
###Code
minimizer = lambda fct, start_param_values : minimize(
fct,
start_param_values,
method = 'SLSQP',
options = {'maxiter' : 5,'eps' : 1e-1, 'ftol' : 1e-4, 'disp' : True, 'iprint' : 2})
###Output
_____no_output_____
###Markdown
The `minimizer` now takes only 2 arguments : the function and the starting parameters values. We also specify some options :- A small value for the maximum number of iteration. You will find that running the VQE algorithm is expensive because of the `evaluator.eval(params)` method. Either it's long to simulate on `qasm_simulator` or because it's running on an actual quantum computer.- A `eps` of `0.1`. This is the size of the step the algorithm is going to change the values of the parameters to try to estimate the slope of the function. By the way, a lot of minimizing algorithms use the slope of the function to know in which direction is the minimum. Since our parameters are all angles in radians a value of 0.1 seems reasonnable. Play with this value if you like.- A `ftol` value of `1e-4`. This is the goal for the precision of the value of the minimum value. The chemical accuracy is around 1 milli-Hartree.- We set `iprint` to `2` so see what is going on. For your final implementation you can set this to `0`.Before implementing the `VQESolver` let's try this minimizer! The function is `evaluator.eval` and we start with a parameter of `0`. This will take a while.
###Code
minimization_result = minimizer(evaluator.eval,[0,])
###Output
NIT FC OBJFUN GNORM
1 3 -1.834406E+00 2.276211E-01
2 6 -1.857755E+00 4.369813E-02
3 11 -1.860118E+00 1.790857E-02
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.8601177724457674
Iterations: 3
Function evaluations: 11
Gradient evaluations: 3
###Markdown
In the end you should get a minimal energy around `-1.86` Hartree, which is a bit lower than what we had before minimizing. You can explore the `minimization_result` to retrieve this value and also the set of optimal parameters.
###Code
opt_params = minimization_result.x
opt_value = minimization_result.fun
print(opt_params)
print(opt_value)
###Output
[-0.22882531]
-1.8601177724457674
###Markdown
VQE Solver. Now you should be in a good position to implement the `lowest_eig_value(lcps)` method of the `VQESolver` class inside the `solver.py` file. Test your method here.
###Code
from solver import VQESolver
%autoreload
vqe_solver = VQESolver(evaluator, minimizer, [0,], name = 'vqe_solver')
opt_value, opt_params = vqe_solver.lowest_eig_value(lcps_h2)
###Output
NIT FC OBJFUN GNORM
1 3 -1.839000E+00 2.553151E-01
2 6 -1.865535E+00 6.395731E-02
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.865591974724343
Iterations: 2
Function evaluations: 10
Gradient evaluations: 2
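###Markdown
For reference, a minimal sketch of what `lowest_eig_value` can look like, assuming the evaluator and minimizer interfaces used in this notebook (your graded implementation may differ):
###Code
# Minimal sketch only, based on the interfaces used above.
def lowest_eig_value_sketch(evaluator, minimizer, start_params, lcps):
    evaluator.set_linear_combinaison_pauli_string(lcps)
    minimization_result = minimizer(evaluator.eval, start_params)
    return minimization_result.fun, minimization_result.x
###Output
_____no_output_____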
###Markdown
There is only one thing missing to have the complete molecular energy: the Coulomb repulsion energy between the nuclei. This value was loaded when we imported the integrals. Let's add it to the electronic energy.
###Code
print('Ground state position estimate (vqe) : ', opt_params)
print('Ground state energy estimate (electronic, vqe) : ', opt_value)
print('Ground state energy estimate (molecular, vqe) : ', opt_value + energy_nuc)
###Output
Ground state position estimate (vqe) : [-0.25526396]
Ground state energy estimate (electronic, vqe) : -1.865591974724343
Ground state energy estimate (molecular, vqe) : -1.1456229802753635
###Markdown
The Eigenstate. What is the eigenstate? We can partially find out by using the `varform` with the parameters we have found and measuring everything in the Z basis.
###Code
eigenstate_qc = varform.copy()
eigenstate_qc.measure_all()
param_dict = dict(zip(eigenstate_qc.parameters,opt_params))
eigenstate_qc = eigenstate_qc.assign_parameters(param_dict)
eigenstate_qc.draw('mpl')
###Output
_____no_output_____
###Markdown
We now execute this circuit.
###Code
execute_opts = {'shots' : 1024}
job = execute(eigenstate_qc,backend=qasm_simulator,**execute_opts)
result = job.result()
counts = result.get_counts(eigenstate_qc)
###Output
_____no_output_____
###Markdown
We will use the `plot_histogram` method from `qiskit.visualization` that takes the counts `dict` as an input.
###Code
from qiskit.visualization import plot_histogram
plot_histogram(counts)
print(f"|a_0101| ~ {np.sqrt(counts['0101']/1024)}")
print(f"|a_1010| ~ {np.sqrt(counts['1010']/1024)}")
###Output
|a_0101| ~ 0.9936320684740404
|a_1010| ~ 0.11267347735824966
###Markdown
We see that the found solution is mostly the state $|0101\rangle$ which is the Hartree-Fock solution when the 2-body Hamiltonian is not present. Adding this 2-body physics, shifts the energy down a bit by introducing a small contribution of $|1010\rangle$. The actual statevector has a `-` sign between these two states.\begin{align}\alpha_{0101}|0101\rangle - \alpha_{1010}|1010\rangle\end{align}But this is not something we can know from this. Fortunatly, H2 is a small system which can be solved exactly and we can find out this phase. Exact Solver (optional)If you want to compare the value you get with the VQE algorithm it would be nice to have the exact value. If you were able to implement the `to_matrix()` method for `PauliString` and `LinearCombinaisonPauliString` then you can find the exact value of the ground state. All you need is to diagonalise the matrix reprensenting the whole Hamiltonian and find the lowest eigenvalue! Obviously this will not be possible to do for very large systems.
###Code
hamiltonian_matrix_h2 = lcps_h2.to_matrix()
eig_values, eig_vectors = np.linalg.eigh(hamiltonian_matrix_h2)
eig_order = np.argsort(eig_values)
eig_values = eig_values[eig_order]
eig_vectors = eig_vectors[:,eig_order]
ground_state_value, ground_state_vector = eig_values[0], eig_vectors[:,0]
print('Ground state vector (exact) : \n', ground_state_vector)
print('Ground state energy (electronic, exact) : ', ground_state_value)
print('Ground state energy (molecular, exact) : ', ground_state_value + energy_nuc)
###Output
Ground state vector (exact) :
[-0. -0.j -0. -0.j -0. -0.j -0. -0.j
-0. -0.j -0.9937604 -0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0.11153594+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j]
Ground state energy (electronic, exact) : -1.8572750302023788
Ground state energy (molecular, exact) : -1.137306035753399
###Markdown
Now you can complete the `ExactSolver` in the `Solver.py` file.
###Code
from solver import ExactSolver
%autoreload
exact_solver = ExactSolver()
ground_state_value, ground_state_vector = exact_solver.lowest_eig_value(lcps_h2)
print('Ground state vector (exact) : ', ground_state_vector)
print('Ground state energy (electronic, exact) : ', ground_state_value)
print('Ground state energy (molecular, exact) : ', ground_state_value + energy_nuc)
###Output
Ground state vector (exact) : [-0. -0.j -0. -0.j -0. -0.j -0. -0.j
-0. -0.j -0.9937604 -0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0.11153594+0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j]
Ground state energy (electronic, exact) : -1.8572750302023788
Ground state energy (molecular, exact) : -1.137306035753399
###Markdown
What are the two basis states involved in the ground state? Let's plot the state vector using `matplotlib`.
###Code
import matplotlib.pyplot as plt
fig,ax = plt.subplots(1,1)
i_max = np.argmax(np.abs(ground_state_vector))
state = ground_state_vector * np.sign(ground_state_vector[i_max])
ax.bar(range(len(state)), np.abs(state), color=(np.real(state) > 0).choose(['r','b']))
plt.xticks(range(len(state)),[f"{i:04b}" for i in range(len(state))], size='small',rotation=60);
###Output
_____no_output_____
###Markdown
What's next?Now that you can find the ground state for a specific H2 molecule configuration (`d = 0.735`), you should be able to do that for many configurations, say `d = 0.2` to `2.5`. Doing that will enable you to plot the so-called dissociation curve: energy vs distance. Do not forget to include the Coulomb repulsion energy of the nuclei!You could also run your algorithm on a noisy backend, either a noisy simulator or a real quantum computer. You've already seen on day 1 how to set/get a noisy backend. You'll see that noise messes things up pretty badly.Running on a real machine will introduce the problem of the qubit layout. You might want to change the `initial_layout` in the `execute_opts` so that your `varform` is not applying CNOT gates between qubits that are not connected. You know this requires inserting SWAP gates, which introduces more noise. Also covered in day 1.To limit the effect of readout noise, you could add a `measure_filter` to your `evaluator`, so that each time you execute the `eval_circuits` you apply the filter to the results. Also covered in day 1.Implement the simultaneous evaluation for bitwise commuting cliques or even for general commuting cliques. Notebook by **Maxime Dion** For the QSciTech-QuantumBC virtual workshop on gate-based quantum computing Plot the H2 Dissociation Curve
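For the last suggestion, a possible first step is a helper that greedily groups Pauli strings which commute bitwise (qubit-wise), so that each group can be estimated from a single measurement circuit. This is only a sketch under the assumption that the Pauli strings are available as plain character strings such as 'ZIZI'; hooking it into the workshop's `LinearCombinaisonPauliString` class is left to you.
```
def bitwise_commute(p1, p2):
    # two Pauli strings commute bitwise if, on every qubit, the operators are equal or one of them is I
    return all(a == b or a == 'I' or b == 'I' for a, b in zip(p1, p2))

def greedy_bitwise_cliques(pauli_strings):
    # greedily pack the strings into groups whose members all commute bitwise
    cliques = []
    for p in pauli_strings:
        for clique in cliques:
            if all(bitwise_commute(p, q) for q in clique):
                clique.append(p)
                break
        else:
            cliques.append([p])
    return cliques

# example: these 4-qubit strings end up in two measurement groups
print(greedy_bitwise_cliques(['ZIZI', 'IZIZ', 'ZZZZ', 'XXYY']))
```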
###Code
from pyscf import gto
%autoreload
n = 50
distances = np.linspace(0.3, 2.5, n)
gs_energies_exact = np.zeros(n)
gs_energies_vqe = np.zeros(n)
energy_nuc = np.zeros(n)
# define mapping
mapping = JordanWigner()
# define minimizer
minimizer = lambda fct, start_param_values : minimize(
fct,
start_param_values,
method = 'SLSQP',
options = {'maxiter' : 5,'eps' : 1e-1, 'ftol' : 1e-4})
# instantiate an exact solver for comparison
exact_solver = ExactSolver()
# VQE setup
vqe_evaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator, execute_opts={'shots' : 1024})
vqe_solver = VQESolver(vqe_evaluator, minimizer, [0,], name = 'vqe_solver')
# try a range of internuclear distances
for i, distance in enumerate(distances): #units in AA
print('Trying Distance '+str(i+1), end="\r")
# build the molecule and basis functions
mol = gto.M(
atom = [['H', (0,0,-distance/2)], ['H', (0,0,distance/2)]],
basis = 'sto-3g'
)
# build the molecular Hamiltonian
molecular_hamiltonian = MolecularFermionicHamiltonian.from_pyscf_mol(mol).include_spin()
# map the Hamiltonian to a LCPS
lcps_h2 = mapping.fermionic_hamiltonian_to_linear_combinaison_pauli_string(molecular_hamiltonian).combine().apply_threshold().sort()
# store the nuclear energy
energy_nuc[i] = mol.energy_nuc()
# diagonalize the Hamiltonian to get energies
Eh2_exact, _ = exact_solver.lowest_eig_value(lcps_h2)
gs_energies_exact[i] = Eh2_exact + energy_nuc[i]
# get the vqe energy
Eh2_vqe, _ = vqe_solver.lowest_eig_value(lcps_h2)
gs_energies_vqe[i] = Eh2_vqe + energy_nuc[i]
print("Done! ", end="\r")
# plot dissociation curve of H2
fig, ax = plt.subplots(1, 1, figsize=(10,8))
ax.plot(distances, gs_energies_exact, c='tab:red', label='Exact', linewidth=5)
ax.plot(distances, gs_energies_vqe, '.', c='tab:blue', label='VQE', ms=20)
ax.set_xlabel(r'Internuclear Distance / $\AA$', fontsize=20)
ax.set_ylabel('Energy / $E_h$', fontsize=20)
ax.set_title('Dissociation Curve of H2', fontsize=28)
ax.legend()
fig.savefig('H2_dissociation.png')
plt.show()
# save these results
with open('h2_dissociation.npz','wb') as f:
np.savez(f, atom='H2', basis=mol.basis, distances=distances, energy_nuc=energy_nuc, gs_exact=gs_energies_exact,
gs_vqe=gs_energies_vqe, varform='varform_4qubits_1param', backend='qasm_simulator', execute_opts=execute_opts,
mapping='Jordan Wigner', initial_params=[0,], minimizer='SLSQP', minimizer_options={'maxiter' : 5,'eps' : 1e-1, 'ftol' : 1e-4})
# to reload any data...
with open('h2_dissociation.npz','rb') as f:
out = np.load(f, allow_pickle=True)
varform_load = out['varform']
basis_load = out['basis']
energy_nuc_load = out['energy_nuc']
gs_exact_load = out['gs_exact']
gs_vqe_load = out['gs_vqe']
execute_opts_load = out['execute_opts']
###Output
_____no_output_____
###Markdown
Now Let's Add A Realistic Noise Model to our Simulator
###Code
from qiskit import IBMQ
from qiskit.providers.aer.noise import NoiseModel
# IBMQ.save_account(TOKEN)
IBMQ.load_account()
IBMQ.providers()
#provider = IBMQ.get_provider(hub='ibm-q-education')
provider = IBMQ.get_provider(hub='ibm-q-education', group='qscitech-quantum', project='qc-bc-workshop')
provider2 = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
###Output
/Users/bhenders/opt/miniconda3/envs/qiskit/lib/python3.8/site-packages/qiskit/providers/ibmq/ibmqfactory.py:192: UserWarning: Timestamps in IBMQ backend properties, jobs, and job results are all now in local time instead of UTC.
warnings.warn('Timestamps in IBMQ backend properties, jobs, and job results '
ibmqfactory.load_account:WARNING:2021-02-01 14:15:45,480: Credentials are already in use. The existing account in the session will be replaced.
###Markdown
Let's Play Around With a Few Backends with different Topologies
###Code
bogota = provider.get_backend('ibmq_bogota')
# santiago = provider.get_backend('ibmq_santiago')
casablanca = provider.get_backend('ibmq_casablanca')
rome = provider.get_backend('ibmq_rome')
qasm_simulator = Aer.get_backend('qasm_simulator')
valencia = provider2.get_backend('ibmq_valencia')
melbourne = provider2.get_backend('ibmq_16_melbourne')
# Bogota
bogota_prop = bogota.properties()
bogota_conf = bogota.configuration()
bogota_nm = NoiseModel.from_backend(bogota_prop)
# Casablanca
casablanca_conf = casablanca.configuration()
casablanca_prop = casablanca.properties()
casablanca_nm = NoiseModel.from_backend(casablanca_prop)
# Valencia
valencia_conf = valencia.configuration()
valencia_prop = valencia.properties()
valencia_nm = NoiseModel.from_backend(valencia_prop)
# Melbourne
melbourne_conf = melbourne.configuration()
melbourne_prop = melbourne.properties()
melbourne_nm = NoiseModel.from_backend(melbourne_prop)
execute_opts = {'shots' : 1024,
'noise_model': bogota_nm,
'coupling_map':bogota_conf.coupling_map,
'basis_gates':bogota_conf.basis_gates}
evaluator = BasicEvaluator(varform, qasm_simulator, execute_opts=execute_opts)
vqe_solver = VQESolver(evaluator, minimizer, [0,], name = 'vqe_solver')
opt_energy, opt_params = vqe_solver.lowest_eig_value(lcps_h2)
execute_opts = {'shots' : 1024}
evaluator = BasicEvaluator(varform, bogota, execute_opts=execute_opts)
vqe_solver = VQESolver(evaluator, minimizer, [0,], name = 'vqe_solver_bogota')
opt_energy, opt_params = vqe_solver.lowest_eig_value(lcps_h2)
###Output
NIT FC OBJFUN GNORM
1 3 -1.615201E+00 3.814424E-01
2 10 -1.619136E+00 8.165022E-02
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.6191355224178043
Iterations: 2
Function evaluations: 10
Gradient evaluations: 2
###Markdown
Notice that the ground state energy of an $H_2$ molecule is found to be -1.653 $E_h$ when our circuit runs on a noisy backend, with the same qubit coupling map as the Bogota device. When we run on the actual Bogota backend, we obtain an even worse result of -1.619 $E_h$. We compare this to -1.866 $E_h$ when running on an ideal simulator with only statistical noise contributing to potential error. We can try to mitigate some of this discrepancy by applying a `MeasurementFilter` to our circuit measurements.**Plan of Attack:**1. Decide on an optimal qubit layout (or 2 good ones) to minimize extra CX gates and error-prone U2 gates.2. Generate **measurement calibration circuits** using these layouts.3. Add a measurement filter to the VQE evaluator4. Compare to un-filtered resultsNote that our circuit uses 4 qubits. Perhaps one of the biggest optimizations we could do would be using parity mapping to reduce qubit requirements. Qubit MappingKeep in mind our variational circuit:
###Code
qc = varform_4qubits_1param.assign_parameters({a: 1})
qc.draw()
###Output
_____no_output_____
###Markdown
Take a look at the coupling on Bogota, the actual machine we hope to use. Then examine the error rates for the CX and single qubit unitaries on the Bogota machine to get a sense of how we might better map our problem to this device.
###Code
bogota_conf.coupling_map
# Print CNOT error from Bogota calibration data
cx_errors = list(map(lambda cm: bogota_prop.gate_error("cx", cm), bogota_conf.coupling_map))
for i in range(len(bogota_conf.coupling_map)):
print(f' -> qubits {bogota_conf.coupling_map[i]} CNOT error: {cx_errors[i]}')
###Output
-> qubits [0, 1] CNOT error: 0.02502260182900265
-> qubits [1, 0] CNOT error: 0.02502260182900265
-> qubits [1, 2] CNOT error: 0.010158268780223023
-> qubits [2, 1] CNOT error: 0.010158268780223023
-> qubits [2, 3] CNOT error: 0.014415524414420677
-> qubits [3, 2] CNOT error: 0.014415524414420677
-> qubits [3, 4] CNOT error: 0.010503141811223582
-> qubits [4, 3] CNOT error: 0.010503141811223582
###Markdown
* CNOT gates between [0,1] and between [2,3] seem to have the largest error. Can we avoid these?
###Code
# Print U2 error from Bogota calibration data
u2_errors = list(map(lambda q: bogota_prop.gate_error("sx", q), range(bogota_conf.num_qubits)))
for i in range(bogota_conf.num_qubits):
print(f' -> qubits {i} U2 error: {u2_errors[i]}')
###Output
-> qubits 0 U2 error: 0.00031209152498965555
-> qubits 1 U2 error: 0.00029958716199301446
-> qubits 2 U2 error: 0.00017693377775244873
-> qubits 3 U2 error: 0.0004023787341145875
-> qubits 4 U2 error: 0.00016725621608793646
###Markdown
* Qubit 3 seems to have the largest error. Can we avoid using it?Let's experiment with several different qubit layouts to see if we can reduce the number of CX gates on problematic pairs and U2 gates on problematic qubits.
###Code
# all of these have an optimal number of CNOTs (3)
# only the last assignment takes effect -- comment out the others to test a given layout
layout = [2,3,1,4] # 1
layout = [1,2,0,3] # 2
layout = [3,2,4,1] # 3 Looks the most promising
layout = [2,1,3,0] # 4
# These are the equivalent topologies on Valencia
layout_valencia = [1,3,2,4]
layout_valencia = [1,3,4,2]
layout_valencia = [3,1,4,2]
layout_valencia = [3,1,2,4]
layout_valencia = [3,1,4,0]
layout_valencia = [1,3,4,0]
qc_l1 = transpile(qc,
coupling_map=bogota_conf.coupling_map,
basis_gates=bogota_conf.basis_gates,
initial_layout=layout,
optimization_level=1)
qc_l1.draw()
print(f'Original circuit depth: {qc.depth()} - Transpiled circuit depth: {qc_l1.depth()}')
###Output
Original circuit depth: 3 - Transpiled circuit depth: 6
###Markdown
**Summary of Findings**:* **Layout 1**: depth 6 with opt level 1 (no improvement for higher) - downside: uses a CX between 0 and 1, which has the highest error rate, plus 4 U2s on q3, which has the highest single-qubit error rate; one CX on 2,3* **Layout 2**: depth 6 with opt level 1 (no improvement for higher) - downside: uses a CX between 0 and 1, which has the highest error rate; one CX on 2,3* **Layout 3**: depth 6 with opt level 1 (no improvement for higher) - downside: one CX between 2 and 3* **Layout 4**: depth 6 with opt level 1 (no improvement for higher) - downside: uses a CX between 0 and 1, which has the highest error rate, plus one on 2,3.**Conclusion**: Layout 3 is probably optimal. Now Create a Measurement Filter
###Code
from qiskit.circuit import QuantumRegister
from qiskit.ignis.mitigation.measurement import complete_meas_cal
# Generate the calibration circuits for the 4 qubits we measure
qr = QuantumRegister(4)
# we need our measurement filter to handle 4 qubits
qubit_list = [0,1,2,3]
# meas_calibs is a list containing 2^n circuits, one for each state.
meas_calibs, state_labels = complete_meas_cal(qubit_list=qubit_list, qr=qr, circlabel='mcal')
print(f'Number of circuits: {len(meas_calibs)}')
meas_calibs[1].draw()
# We need the filter to correspond to the layout we are using
calibration_layout = [3,2,4,1]
result = execute(meas_calibs,
qasm_simulator,
shots=8192,
noise_model=bogota_nm,
coupling_map=bogota_conf.coupling_map,
basis_gates=bogota_conf.basis_gates,
initial_layout=calibration_layout).result()
from qiskit.visualization import plot_histogram
# For example, plot the histogram for the circuit corresponding to state '0101' (index 5)
plot_histogram(result.get_counts(meas_calibs[5]))
from qiskit.ignis.mitigation.measurement import CompleteMeasFitter
# Initialize the measurement correction fitter for a full calibration
meas_fitter = CompleteMeasFitter(result, state_labels)
# Get the filter object
meas_filter = meas_fitter.filter
fig, ax = plt.subplots(1,1, figsize=(10,8))
meas_fitter.plot_calibration(ax=ax)
fig.savefig('images/4_qubit_measurement_filter.svg')
###Output
_____no_output_____
###Markdown
Bogota Noise Model with the Default Qubit Layout and No Measurement Filter
###Code
%autoreload
execute_opts = {'shots' : 1024,
'noise_model': bogota_nm,
'coupling_map':bogota_conf.coupling_map,
'basis_gates':bogota_conf.basis_gates,
}
evaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator,execute_opts=execute_opts, measure_filter=None)
vqe_solver = VQESolver(evaluator, minimizer, [0,], name='vqe_solver')
energy_default, opt_params_default = vqe_solver.lowest_eig_value(lcps_h2)
###Output
NIT FC OBJFUN GNORM
1 3 -1.660741E+00 1.105373E-01
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.6647674403625325
Iterations: 1
Function evaluations: 10
Gradient evaluations: 1
###Markdown
Bogota Noise Model With An Improved Qubit Layout but no Measurement Filter
###Code
%autoreload
execute_opts = {'shots' : 1024,
'noise_model': bogota_nm,
'coupling_map':bogota_conf.coupling_map,
'basis_gates':bogota_conf.basis_gates,
'initial_layout': [3,2,4,1]
}
evaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator,execute_opts=execute_opts, measure_filter=None)
vqe_solver = VQESolver(evaluator, minimizer, [0,], name='vqe_solver')
energy_layout, opt_params_layout = vqe_solver.lowest_eig_value(lcps_h2)
###Output
NIT FC OBJFUN GNORM
1 3 -1.684631E+00 2.120353E-01
2 8 -1.689694E+00 2.218513E-01
3 14 -1.697248E+00 3.986125E-02
4 18 -1.704745E+00 1.969790E-01
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.6806716594964146
Iterations: 4
Function evaluations: 29
Gradient evaluations: 4
###Markdown
Bogota Noise Model With An Improved Qubit Layout and Measurement Filter
###Code
%autoreload
execute_opts = {'shots' : 1024,
'noise_model': bogota_nm,
'coupling_map':bogota_conf.coupling_map,
'basis_gates':bogota_conf.basis_gates,
'initial_layout': [3,2,4,1]
}
evaluator = BasicEvaluator(varform_4qubits_1param, qasm_simulator,execute_opts=execute_opts, measure_filter=meas_filter)
vqe_solver = VQESolver(evaluator, minimizer, [0,], name='vqe_solver')
energy_layout_meas, opt_params_layout_meas = vqe_solver.lowest_eig_value(lcps_h2)
###Output
NIT FC OBJFUN GNORM
1 3 -1.789338E+00 7.476305E-02
2 6 -1.821437E+00 1.549079E-01
3 10 -1.825480E+00 8.058220E-02
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.8349246159398094
Iterations: 3
Function evaluations: 15
Gradient evaluations: 3
|
notebooks/00_dhs_prep.ipynb | ###Markdown
Demographic and Health Survey (DHS) Data PreparationDownload the Philippine National DHS Dataset from the [official website here](https://www.dhsprogram.com/what-we-do/survey/survey-display-510.cfm). Copy and unzip the file in the data directory. Importantly, the DHS folder should contain the following files:- `PHHR70DT/PHHR70FL.DTA`- `PHHR70DT/PHHR70FL.DO` Imports
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
File locations
###Code
data_dir = '../data/'
dhs_zip = data_dir + '<INSERT DHS FOLDER NAME HERE>/'
dhs_file = dhs_zip + 'PHHR70DT/PHHR70FL.DTA'
dhs_dict_file = dhs_zip + 'PHHR70DT/PHHR70FL.DO'
###Output
_____no_output_____
###Markdown
Helper Function
###Code
def get_dhs_dict(dhs_dict_file):
    dhs_dict = dict()
    with open(dhs_dict_file, 'r', errors='replace') as file:
        for line in file:
            if 'label variable' in line:
                code = line.split()[2]
                colname = ' '.join([x.strip('"') for x in line.split()[3:]])
                dhs_dict[code] = colname
    return dhs_dict
###Output
_____no_output_____
###Markdown
Load DHS Dataset
###Code
dhs = pd.read_stata(dhs_file, convert_categoricals=False)
dhs_dict = get_dhs_dict(dhs_dict_file)
dhs = dhs.rename(columns=dhs_dict).dropna(axis=1)
print('Data Dimensions: {}'.format(dhs.shape))
###Output
Data Dimensions: (27496, 342)
###Markdown
Aggregate Columns
###Code
data = dhs[[
'Cluster number',
'Wealth index factor score combined (5 decimals)',
'Education completed in single years',
'Has electricity'
]].groupby('Cluster number').mean()
data['Time to get to water source (minutes)'] = dhs[[
'Cluster number',
'Time to get to water source (minutes)'
]].replace(996, 0).groupby('Cluster number').median()
data.columns = [[
'Wealth Index',
'Education completed (years)',
'Access to electricity',
'Access to water (minutes)'
]]
print('Data Dimensions: {}'.format(data.shape))
data.head(2)
###Output
Data Dimensions: (1249, 4)
###Markdown
Save Processed DHS File
###Code
data.to_csv(data_dir+'dhs_indicators.csv')
###Output
_____no_output_____ |
2020/2020-10-31-ScipyJapan-Regex-to-DL/nb-01.ipynb | ###Markdown
Task 1: Create a corpusThe first task will be making your own training data based on the format below. We will work with a small dataset that we've provided and later with some publicly available ones, but participants are expected to create their own for this part of the workshop. Training DataThis is a sample format of the training data we want to use:```training_phrases = { 'when_is_check_in' : ['when is check-in', 'When can I check in?', "when's checkin"], 'where_is_the_front_desk' : ['Where is the front desk?', 'what is the location of the front desk?', ...]}answers = { 'when_is_check_in' : 'Check in is at 3pm! :)', 'where_is_the_front_desk' : 'The front desk is located on the 2nd floor.'}```
###Code
import pandas as pd
import json
#get sample training data
!wget https://raw.githubusercontent.com/bespoke-inc/bespoke-public-talks/master/2020/2020-08-22-MLT-Rules-to-DL/training_sample.json
training_data = json.load(open('./training_sample.json','r'))
list(training_data.keys())
answers = {
'hotel.when_is_check_in': 'Check in is at 3pm!',
'hotel.when_is_check_out': 'Check out is at 10am!',
'hotel.is_there_late_check_out': 'For early check-out or late check-in please schedule beforehand',
'hotel.is_there_early_check_in': 'For early check-out or late check-in please schedule beforehand',
'hotel.where_is_the_front_desk_located': 'Front desk is located on the 2nd floor'
}
import re
punct_re_escape = re.compile('[%s]' % re.escape('!"#$%&()*+,./:;<=>?@[\\]^_`{|}~'))
class MyChatbotData:
def __init__(self, json_obj, text_fld, answers):
dfs = []
for i, (intent, data) in enumerate(json_obj.items()):
# lowercase and remove punctuation
patterns = data[text_fld].copy()
for i, p in enumerate(patterns):
p = p.lower()
p = self.remove_punctuation(p)
patterns[i] = p
answer = answers[intent]
df = pd.DataFrame(list(zip([intent]*len(patterns), patterns, [answer]*len(patterns))), \
columns=['intent', 'phrase', 'answer'])
dfs.append(df)
self.df = pd.concat(dfs)
def get_answer(self, intent):
return pd.unique(self.df[self.df['intent'] == intent]['answer'])[0]
def remove_punctuation(self, text):
return punct_re_escape.sub('', text)
def get_phrases(self, intent):
return list(self.df[self.df['intent'] == intent]['phrase'])
def get_intents(self):
return list(pd.unique(self.df['intent']))
def show_batch(self, size=5):
return self.df.head(size)
def __len__(self):
return len(self.df)
chatbot_data = MyChatbotData(training_data, 'patterns', answers)
chatbot_data.show_batch(10)
len(chatbot_data)
###Output
_____no_output_____
###Markdown
Rule-based intent ~~classification~~ matchingThe simplest approach to find out if a query falls into a certain intent is to do some string comparison with our dataset.
###Code
UNK = "I don't know"
def exact_match(query):
intents = chatbot_data.get_intents()
for i in intents:
phrases = chatbot_data.get_phrases(i)
if query in phrases:
return chatbot_data.get_answer(i)
return UNK
exact_match("is there early check-in")
exact_match("when do i check in")
exact_match("can i check-in earlier than 12pm")
###Output
_____no_output_____
###Markdown
Preprocessing- CJK- normalize contractions- remove hyphens- remove stopwords- check for typos- normalize plurals- normalize ascii- normalize emojis- remove punctuation
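Only some of these steps are implemented below (emoji, ASCII and punctuation handling); contractions, hyphens, plurals and typos are left out. As an illustration, a small sketch of contraction and hyphen normalization; the contraction table here is just an example and is by no means exhaustive.
```
CONTRACTIONS = {
    "what's": "what is", "when's": "when is", "where's": "where is",
    "can't": "cannot", "won't": "will not", "n't": " not",
}

def normalize_contractions(text):
    # longest keys first so that "can't" is handled before the generic "n't"
    for short, full in sorted(CONTRACTIONS.items(), key=lambda kv: -len(kv[0])):
        text = text.replace(short, full)
    return text

def remove_hyphens(text):
    return text.replace('-', ' ')

print(normalize_contractions(remove_hyphens("when's check-in? i can't find it")))
```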
###Code
import re
EMOJIS = [[':)', '😀'],[';)', '😉'],[':(', '😞'],[';((', '😢'],[':p', '😛']]
_emoji_re = '[\U00010000-\U0010ffff]+'
emoji_re = re.compile(_emoji_re, flags=re.UNICODE)
def emoji_normalize(text):
for e1, e2 in EMOJIS:
text = text.replace(e1, e2)
return text
def is_emoji(text):
emoji = "".join(re.findall(_emoji_re, text))
return emoji == text
def emoji_isolate(text):
EMJ = "__EMOJI__"
emoji_list = re.findall(_emoji_re, text)
text = emoji_re.sub(f" {EMJ} ", text)
new_str, ctr = [], 0
for tok in text.split():
if tok == EMJ:
new_str.append(emoji_list[ctr])
ctr += 1
else:
new_str.append(tok)
return " ".join(new_str).strip()
import unicodedata
def ascii_normalize(text):
return unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode("utf-8")
punct_re_escape = re.compile('[%s]' % re.escape('!"#$%&()*+,./:;<=>?@[\\]^_`{|}~'))
def remove_punctuation(text):
    return punct_re_escape.sub('', text)
def preprocess(text):
text = ascii_normalize(text) or text
text = emoji_normalize(text) or text
text = emoji_isolate(text) or text
text = remove_punctuation(text) or text
return text
###Output
_____no_output_____
###Markdown
Partial String MatchingInstead of checking whether the entire query string exists in our dataset, we try to find a partial match and pick the intent that matches most closely. We will try to do this using Levenshtein distance to calculate the differences between sequences. The library [fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy)can help us do this
###Code
!pip install fuzzywuzzy
from fuzzywuzzy import process
def fuzzy_matching(query):
intents = chatbot_data.get_intents()
for i in intents:
phrases = chatbot_data.get_phrases(i)
match, score = process.extractOne(query, phrases)
if score > 90:
return chatbot_data.get_answer(i)
return UNK
fuzzy_matching("when do i check-in")
fuzzy_matching("can i check-in earlier than 12pm")
fuzzy_matching("what time is early check-in")
###Output
_____no_output_____
###Markdown
ML ClassificationWe will now add a probabilistic classifier to our set of methods to get better intent classification.The algorithm we will use is [naive bayes](https://scikit-learn.org/stable/modules/naive_bayes.html)Naive Bayes classifiers work quite well for small amounts of training data. TokenizerBefore we feed the data to our model for training, we need to tokenize our training instances
###Code
import spacy
nlp = spacy.load('en_core_web_sm', disable=['parser', 'tagger'])
doc = nlp("when can i check in?")
[tok.text for tok in doc]
doc = nlp("when can i check-in?")
[tok.text for tok in doc]
doc = nlp("i didn't")
[tok.text for tok in doc]
doc = nlp("thank you ありがとう")
[tok.text for tok in doc]
doc = nlp("didn't couldn't ")
[tok.text for tok in doc if tok.text.strip()]
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stopwords = set(stopwords.words('english'))
def tokenize_nd_join(text):
doc = nlp(text.lower())
return " ".join(tok.text for tok in doc if tok.text.strip() not in stopwords)
def get_xs_ys(train_data):
x, y = [], []
intents = chatbot_data.get_intents()
for i in intents:
phrases = chatbot_data.get_phrases(i)
x += [tokenize_nd_join(phrase) for phrase in phrases]
y += [i]*len(phrases)
return x, y
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
def train(x,y):
vect = CountVectorizer(ngram_range=(1,2),max_features=None)
nb = Pipeline([('vect',vect),('clf',ComplementNB(alpha=1.0,norm=False))])
nb.fit(x,y)
return nb
x, y = get_xs_ys(training_data)
nb_model = train(x, y)
def nb_pred(query):
tokenized_query = tokenize_nd_join(query)
pred = nb_model.predict([tokenized_query])[0]
return chatbot_data.get_answer(pred)
nb_pred("what time is early check-in")
import numpy as np
def nb_pred_top3(query):
tokenized_query = tokenize_nd_join(query)
pred_prob = nb_model.predict_proba([tokenized_query])
preds_sorted = np.argsort(pred_prob)
    top3 = preds_sorted[:,-1], preds_sorted[:,-2], preds_sorted[:,-3]
if pred_prob[0,top3[0]] > (pred_prob[0,top3[1]] + pred_prob[0,top3[2]]):
pred = nb_model.named_steps['clf'].classes_[top3[0]][0]
return chatbot_data.get_answer(pred)
return UNK
nb_pred_top3("is there early check-in")
###Output
_____no_output_____
###Markdown
Intent Classification Pipeline
###Code
def get_pred(query):
query = query.lower()
pred = exact_match(query)
if pred == UNK: pred = exact_match(preprocess(query))
if pred == UNK: pred = nb_pred_top3(query)
if pred == UNK: pred = nb_pred_top3(preprocess(query))
if pred == UNK: pred = fuzzy_matching(query)
if pred == UNK: pred = fuzzy_matching(preprocess(query))
return pred
get_pred("when is check-in")
get_pred("can i check-out late?")
get_pred("where can i find the front desk?")
###Output
_____no_output_____
###Markdown
Moving ML to DLWe see that this pipeline using some rules and a probabilistic model is working quite well.However, it doesn't scale with data and requires adding a lot of preprocessing and nuances to get working properly.Pros:- There is a noticeable improvement in using NNs over the current probabilistic model.- The model can scale with data, i.e. it can improve as we add more annotated training data.- This can be a good point to move to NNs, since we are reaching the limits of rule-based systems, e.g. fewer engineered features.- Simplified pipeline.Cons:- Huge gains cannot be seen until the data is cleaned.- In its current state, the model will either be the same or slightly better than the current approach.- Infrastructure changes. Classification with Distil Bert
###Code
from fastai import *
from fastai.text import *
answers = {
'hotel.when_is_check_in': 'Check in is at 3pm!',
'hotel.when_is_check_out': 'Check out is at 10am!',
'hotel.is_there_late_check_out': 'For early check-out or late check-in please schedule beforehand',
'hotel.is_there_early_check_in': 'For early check-out or late check-in please schedule beforehand',
'hotel.where_is_the_front_desk_located': 'Front desk is located on the 2nd floor'
}
class MyChatbotData:
def __init__(self, json_obj, text_fld, answers):
dfs = []
for i, (intent, data) in enumerate(json_obj.items()):
# lowercase and remove punctuation
patterns = data[text_fld].copy()
for i, p in enumerate(patterns):
p = p.lower()
p = self.remove_punctuation(p)
patterns[i] = p
answer = answers[intent]
df = pd.DataFrame(list(zip([intent]*len(patterns), patterns, [answer]*len(patterns))), \
columns=['intent', 'phrase', 'answer'])
dfs.append(df)
self.df = pd.concat(dfs)
def get_answer(self, intent):
return pd.unique(self.df[self.df['intent'] == intent]['answer'])[0]
def remove_punctuation(self, text):
return punct_re_escape.sub('', text)
def get_phrases(self, intent):
return list(self.df[self.df['intent'] == intent]['phrase'])
def get_intents(self):
return list(pd.unique(self.df['intent']))
def show_batch(self, size=5):
return self.df.head(size)
def __len__(self):
return len(self.df)
training_data = json.load(open('./training_sample.json','r'))
list(training_data.keys())
chatbot_data = MyChatbotData(training_data, 'patterns', answers)
df = chatbot_data.df
df.head()
len(set(df['intent'])), len(df)
path = Path('./')
###Output
_____no_output_____
###Markdown
Adapting fastai code for training transformers was inspired from this [blog post](https://towardsdatascience.com/fastai-with-transformers-bert-roberta-xlnet-xlm-distilbert-4f41ee18ecb2)
###Code
from transformers import DistilBertForSequenceClassification, DistilBertTokenizer, DistilBertConfig
from transformers import PreTrainedTokenizer
class TransformersBaseTokenizer(BaseTokenizer):
def __init__(self, pretrained_tokenizer: PreTrainedTokenizer, model_type = 'bert', **kwargs):
self._pretrained_tokenizer = pretrained_tokenizer
self.max_seq_len = pretrained_tokenizer.max_len
self.model_type = model_type
def __call__(self, *args, **kwargs):
return self
def tokenizer(self, t:str) -> List[str]:
CLS = self._pretrained_tokenizer.cls_token
SEP = self._pretrained_tokenizer.sep_token
tokens = [CLS] + self._pretrained_tokenizer.tokenize(t)[:self.max_seq_len - 2] + [SEP]
return tokens
transformer_tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
transformer_base_tokenizer = TransformersBaseTokenizer(pretrained_tokenizer = transformer_tokenizer, model_type = 'distilbert')
fastai_tokenizer = Tokenizer(tok_func = transformer_base_tokenizer, pre_rules=[], post_rules=[])
class TransformersVocab(Vocab):
def __init__(self, tokenizer: PreTrainedTokenizer):
super(TransformersVocab, self).__init__(itos = [])
self.tokenizer = tokenizer
def numericalize(self, t:Collection[str]) -> List[int]:
"Convert a list of tokens `t` to their ids."
return self.tokenizer.convert_tokens_to_ids(t)
def textify(self, nums:Collection[int], sep=' ') -> List[str]:
"Convert a list of `nums` to their tokens."
nums = np.array(nums).tolist()
return sep.join(self.tokenizer.convert_ids_to_tokens(nums)) if sep is not None else self.tokenizer.convert_ids_to_tokens(nums)
def __getstate__(self):
return {'itos':self.itos, 'tokenizer':self.tokenizer}
def __setstate__(self, state:dict):
self.itos = state['itos']
self.tokenizer = state['tokenizer']
self.stoi = collections.defaultdict(int,{v:k for k,v in enumerate(self.itos)})
transformer_vocab = TransformersVocab(tokenizer = transformer_tokenizer)
numericalize_processor = NumericalizeProcessor(vocab=transformer_vocab)
tokenize_processor = TokenizeProcessor(tokenizer=fastai_tokenizer,
include_bos=False,
include_eos=False)
transformer_processor = [tokenize_processor, numericalize_processor]
pad_idx = transformer_tokenizer.pad_token_id
databunch = (TextList.from_df(df, cols='phrase', processor=transformer_processor)
.split_by_rand_pct()
.label_from_df(cols= 'intent')
.databunch(bs=64, pad_first=False, pad_idx=pad_idx))
class TransformerModel(nn.Module):
def __init__(self, transformer):
super(TransformerModel,self).__init__()
self.transformer = transformer
def forward(self, input_ids):
# Return only the logits from the transfomer
logits = self.transformer(input_ids)[0]
return logits
config = DistilBertConfig.from_pretrained('distilbert-base-uncased')
config.num_labels = databunch.train_ds.c
distil_bert = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', config=config)
transformer_model = TransformerModel(distil_bert)
from transformers import AdamW
from functools import partial
CustomAdamW = partial(AdamW, correct_bias=False)
learn = Learner(databunch, transformer_model, opt_func = CustomAdamW, metrics=[accuracy])
list_layers = [learn.model.transformer.base_model.embeddings,
learn.model.transformer.base_model.transformer.layer[0],
learn.model.transformer.base_model.transformer.layer[1],
learn.model.transformer.base_model.transformer.layer[2],
learn.model.transformer.base_model.transformer.layer[3],
learn.model.transformer.base_model.transformer.layer[4],
learn.model.transformer.base_model.transformer.layer[5],
learn.model.transformer.pre_classifier,
learn.model.transformer.classifier]
learn.split(list_layers);
learn.freeze_to(-2)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(3,max_lr=1e-3,moms=(0.8,0.7))
learn.freeze_to(-3)
learn.fit_one_cycle(2, slice(8e-4/(2.6**4),8e-4), moms=(0.8,0.7))
learn.freeze_to(-5)
learn.fit_one_cycle(2, slice(5e-4/(2.6**4),5e-4), moms=(0.8,0.7))
learn.freeze_to(-8)
learn.fit_one_cycle(2, slice(3e-4/(2.6**4),3e-4), moms=(0.8,0.7))
learn.unfreeze()
learn.fit_one_cycle(5, slice(1e-4/(2.6**4),1e-4), moms=(0.8,0.7))
pred, _, probs = learn.predict("what time is check-in?")
print(learn.data.train_ds.y.classes[pred.data], probs)
interp = TextClassificationInterpretation(learn,*learn.get_preds(with_loss=True))
interp.show_top_losses(20)
###Output
_____no_output_____
###Markdown
Using Named Entity Recognition to extract parametersGeneralizing your training data with entity classesAdd named entities to one intent and use spaCy to extract
###Code
class ParameterModel():
def __init__(self, param_list):
self.parameters = param_list
self.label_ = 'PARAM'
    def replace_entities(self, text):
        """Replace entities in the text with their respective labels"""
        entity_replaced_text = text
        for p in self.parameters:
            if p in entity_replaced_text:
                # keep accumulating replacements so that several parameters can be replaced in one text
                entity_replaced_text = entity_replaced_text.replace(p, f'<__{self.label_}__>')
        return entity_replaced_text
food_list = ['japanese','indian','thai','chinese','fast food','bbq','cafe']
food_param_model = ParameterModel(food_list)
food_param_model.replace_entities("I want fast food")
MODEL_NAME = 'en_core_web_sm'
class SpacyModel(object):
spacy_model = None # Where we keep the model when it's loaded
@classmethod
def get_base_spacy_model(cls):
"""Get the base spacy model"""
if not cls.spacy_model:
cls.spacy_model = spacy.load(MODEL_NAME)
return cls.spacy_model
@classmethod
def replace_entities(cls, text):
"""Replace entities in the text with their respective labels"""
spacy_model = cls.get_base_spacy_model()
doc = spacy_model(text)
entity_replaced_text = text
for e in reversed(doc.ents):
start = e.start_char
end = start + len(e.text)
entity_replaced_text = entity_replaced_text[:start] + f'<__{e.label_}__>' + entity_replaced_text[end:]
return entity_replaced_text
SpacyModel.replace_entities("how do i get to tokyo")
SpacyModel.replace_entities("can i get a reservation for Sunday")
###Output
_____no_output_____
###Markdown
Using users to disambiguate intentsWhen two intents are good candidates for the user's text, instead of picking the best, ask the user which they meant.Add disambiguation ability in your bot TyposSince users are typing in their text, errors are common. Hence, the bot must be resilient to typosAdd typo correction in your bot Putting it all together
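The cells below leave `typo_corrected` as a stub and do not implement disambiguation, so here is a hedged sketch of both ideas: spell-fixing tokens against the training vocabulary with `fuzzywuzzy`, and asking the user when the top two intents are too close. The vocabulary construction, the score threshold of 85 and the probability margin of 0.1 are illustrative assumptions, not tuned values.
```
from fuzzywuzzy import process

def build_vocabulary(data):
    # collect every token seen in the training phrases
    vocab = set()
    for intent in data.get_intents():
        for phrase in data.get_phrases(intent):
            vocab.update(phrase.split())
    return list(vocab)

def typo_correct(text, vocab, min_score=85):
    # replace each unknown token by its closest vocabulary word, if the match is good enough
    fixed = []
    for tok in text.lower().split():
        if tok in vocab:
            fixed.append(tok)
        else:
            match, score = process.extractOne(tok, vocab)
            fixed.append(match if score >= min_score else tok)
    return " ".join(fixed)

def disambiguate(query, model, margin=0.1):
    # if the two best intents are closer than `margin`, ask the user instead of guessing
    probs = model.predict_proba([tokenize_nd_join(query)])[0]
    order = probs.argsort()[::-1]
    best, second = order[0], order[1]
    classes = model.named_steps['clf'].classes_
    if probs[best] - probs[second] < margin:
        return f"Did you mean '{classes[best]}' or '{classes[second]}'?"
    return classes[best]
```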
###Code
def replace_entities(text):
entity_replaced = SpacyModel.replace_entities(text)
entity_replaced = food_param_model.replace_entities(entity_replaced)
return entity_replaced
class TextFactory:
def __init__(self, text):
self.raw = text
self._sanitized = None
self._entity_replaced = None
self._sanitized_and_entity_replaced = None
def sanitized(self):
if not self._sanitized:
self._sanitized = preprocess(self.raw)
return self._sanitized
def entity_replaced(self):
if not self._entity_replaced:
self._entity_replaced = replace_entities(self.raw)
return self._entity_replaced
def typo_corrected(self):
#if not self._typo_corrected:
# self._typo_corrected = typo_correct(self.raw)
#return self._typo_corrected
pass
    def sanitized_and_entity_replaced(self):
        if not self._sanitized_and_entity_replaced:
            self._sanitized_and_entity_replaced = replace_entities(self.sanitized())
        return self._sanitized_and_entity_replaced
class ClassifierBuilder:
def __init__(self, query):
self.raw_query = query
self.queries = []
self.classifiers = []
def add_classifiers_query(self, classifier, query):
self.classifiers.append(classifier)
self.queries.append(query)
return self
    def build(self):
        # validation
        if not self.classifiers:
            raise Exception('Must specify classifiers')
        # run each (classifier, query) pair in order until one returns a prediction
        pred = UNK
        for c, q in zip(self.classifiers, self.queries):
            pred = c.predict(q)
            if pred != UNK:
                return pred
        return pred
class MyChatbotData:
def __init__(self, json_obj, text_fld, answers):
dfs = []
for i, (intent, data) in enumerate(json_obj.items()):
# lowercase and remove punctuation
patterns = data[text_fld].copy()
for i, p in enumerate(patterns):
p = p.lower()
patterns[i] = p
answer = answers[intent]
df = pd.DataFrame(list(zip([intent]*len(patterns), patterns, [answer]*len(patterns))), \
columns=['intent', 'phrase', 'answer'])
dfs.append(df)
self.df = pd.concat(dfs)
def get_answer(self, intent):
return pd.unique(self.df[self.df['intent'] == intent]['answer'])[0]
def remove_punctuation(self, text):
return punct_re_escape.sub('', text)
def get_phrases(self, intent):
return list(self.df[self.df['intent'] == intent]['phrase'])
def get_intents(self):
return list(pd.unique(self.df['intent']))
def show_batch(self, size=5):
return self.df.head(size)
def __len__(self):
return len(self.df)
class ExactMatch:
def __init__(self, data):
self.data = data
def predict(self, query):
intents = self.data.get_intents()
for i in intents:
phrases = self.data.get_phrases(i)
if query in phrases:
return i
return UNK
class FuzzyMatch:
def __init__(self, data):
self.data = data
    def predict(self, query):
        intents = self.data.get_intents()
        for i in intents:
            phrases = self.data.get_phrases(i)
            match, score = process.extractOne(query, phrases)
            if score > 90:
                return i
        return UNK
class NaiveBayesMatch:
def __init__(self, data, model):
self.data = data
self.nb_model = model
    def predict(self, query):
        tokenized_query = tokenize_nd_join(query)
        pred_prob = self.nb_model.predict_proba([tokenized_query])
        preds_sorted = np.argsort(pred_prob)
        top3 = preds_sorted[:,-1], preds_sorted[:,-2], preds_sorted[:,-3]
        if pred_prob[0,top3[0]] > (pred_prob[0,top3[1]] + pred_prob[0,top3[2]]):
            pred = self.nb_model.named_steps['clf'].classes_[top3[0]][0]
            return pred
        return UNK
class DistilBertMatch:
def __init__(self, data, learner):
self.data = data
self.learner = learner
def predict(self, query):
pred, idx, probs = self.learner.predict(query)
        return self.learner.data.train_ds.y.classes[pred.data]
exact_match_classifier = ExactMatch(chatbot_data)
fuzzy_match_classifier = FuzzyMatch(chatbot_data)
naive_bayes_classifier = NaiveBayesMatch(chatbot_data, nb_model)
distil_bert_classifier = DistilBertMatch(chatbot_data, learn)
class Predictor:
def __init__(self, query):
self.text_factory = TextFactory(query)
self.pipeline = ClassifierBuilder(query)
def predict(self):
pred = self.pipeline.add_classifiers_query(exact_match_classifier, self.text_factory.raw) \
.add_classifiers_query(exact_match_classifier, self.text_factory.sanitized()) \
.add_classifiers_query(exact_match_classifier, self.text_factory.entity_replaced()) \
.add_classifiers_query(distil_bert_classifier, self.text_factory.raw) \
.add_classifiers_query(distil_bert_classifier, self.text_factory.sanitized()) \
.add_classifiers_query(distil_bert_classifier, self.text_factory.entity_replaced()) \
.add_classifiers_query(fuzzy_match_classifier, self.text_factory.raw) \
.build()
return pred
predictor = Predictor("what time is check-in")
predictor.predict()
###Output
_____no_output_____
###Markdown
Metrics
###Code
from sklearn.model_selection import train_test_split
class MyChatbotData:
def __init__(self, json_obj, text_fld):
xs, ys, ans = [], [], []
for i, (intent, data) in enumerate(json_obj.items()):
# lowercase and remove punctuation
patterns = data[text_fld].copy()
xs += [p.lower() for p in patterns]
ys += [intent]* len(patterns)
(train_x, train_y), (test_x, test_y) = MyChatbotData.make_train_test_split(xs, ys)
self.train_df = pd.DataFrame(np.stack((train_x, train_y),axis=1), columns=['phrase','intent'])
self.test_df = pd.DataFrame(np.stack((test_x, test_y),axis=1), columns=['phrase','intent'])
@staticmethod
def make_train_test_split(xs, ys):
X_train, X_test, y_train, y_test = train_test_split(xs, ys, test_size=0.2, random_state=42)
return (X_train, y_train), (X_test, y_test)
def remove_punctuation(self, text):
return punct_re_escape.sub('', text)
def get_phrases(self, intent):
return list(self.train_df[self.train_df['intent'] == intent]['phrase'])
def get_intents(self):
return list(pd.unique(self.train_df['intent']))
def show_batch(self, size=5):
return self.train_df.head(size)
chatbot_data = MyChatbotData(training_data, 'patterns')
test_set = chatbot_data.test_df
test_set.head()
exact_match_classifier = ExactMatch(chatbot_data)
fuzzy_match_classifier = FuzzyMatch(chatbot_data)
naive_bayes_classifier = NaiveBayesMatch(chatbot_data, nb_model)
distil_bert_classifier = DistilBertMatch(chatbot_data, learn)
from sklearn.metrics import precision_recall_fscore_support
preds, labels = [], list(test_set['intent'])
for i in range(len(test_set)):
predictor = Predictor(test_set.iloc[i]['phrase'])
preds.append(predictor.predict())
precision_recall_fscore_support(labels, preds, average='weighted')
###Output
_____no_output_____ |
Programko_vzor.ipynb | ###Markdown
ProgrammingFKS Summer School 2018Maťo Gažo, Fero Dráček(& materials borrowed from Matej Badin, Fero Herman, Kubo, Peťo, the FX Spring Schools and various corners of the internet)In this course we will cover the basics of programming and learn to program mathematics and physics.Such knowledge is great to have, and thanks to it you will: * do your homework more efficiently* solve seminar and olympiad problems with higher quality* understand the world better (IT is currently the fastest-growing industry on the market)A computer is dumb and you have to tell it and explain everything to it. You can communicate with it on several levels; we will use Python. Python (the name comes from Monty Python's Flying Circus) is a general-purpose programming language that can be used to build websites as well as to do serious scientific computing. That means learning it does no harm, and one day it may even earn you a living.The interface we write code in is called Jupyter Notebook. It is an environment designed so that you can program literally in the browser and split the code into small pieces. To run a piece of the program, just press Shift+Enter. Data types and operators Numbers as expected, this returns three
###Code
3
2+3 # addition
6-2 # subtraction
10*2 # multiplication
35/5 # division
5//3 # integer division  TODO: is this needed?
7%3 # modulo
2**3 # exponentiation
4 * (2 + 3) # order of operations is respected
###Output
_____no_output_____
###Markdown
Logical expressions
###Code
1 == 1 # logical equality
2 != 3 # logical inequality
1 < 10
1 > 10
2 <= 2
###Output
_____no_output_____
###Markdown
Variables This is a variable.After pressing Shift+Enter the program in the cell runs and the variable is stored in memory (the RAM; everything happens in RAM).
###Code
a = 2
###Output
_____no_output_____
###Markdown
Now we can work with it like with an ordinary number.
###Code
2 * a
a + a
a + a*a
###Output
_____no_output_____
###Markdown
We can also raise it to a power.
###Code
a**3
###Output
_____no_output_____
###Markdown
Let's add a second variable.
###Code
b = 5
###Output
_____no_output_____
###Markdown
The following calculations turn out as expected.
###Code
a + b
a * b
b**a
###Output
_____no_output_____
###Markdown
Real numbers can also be written in scientific notation: $2.3\times 10^{-3}$.
###Code
d = 2.3e-3
###Output
_____no_output_____
###Markdown
FunctionsLet's write a simple function that adds two numbers for us, so that we no longer have to bother with it ourselves:
###Code
def scitaj(a, b):
    return a + b
scitaj(10, 12) # returns the sum
###Output
_____no_output_____
###Markdown
The function works on integers as well as real numbers. Our addition function has __five essential parts__:1. `def`: this word defines the function.2. the colon at the end of the first line; the definition starts there.3. The code inside the function is indented by four spaces.4. The code itself. Anything can happen inside; Python goes through it step by step.5. `return`: the key part. After this word you write what the output of the function is. Task 1Write a function `priemer` that takes two numbers (the heights of two boys) and computes their average height.When you have finished the task, report to a leader.
###Code
# Your solution:
def priemer(prvy, druhy):
return ((prvy+druhy)/2)
priemer(90,20)
###Output
_____no_output_____
###Markdown
Let's do some physicsAt this point we can start using Python as a more sophisticated calculator and use it to solve basic physics problems. A simple example to start with: you are given several physical constants as **variables**.Imagine your task is to compute some physical quantity for several given values. It is very convenient to write a function into which you always plug the given values. Given constants
###Code
kb=1.38064852e-23 # Boltzmann constant
G=6.67408e-11 # gravitational constant
###Output
_____no_output_____
###Markdown
Task 2Write a function that computes the gravitational force between two bodies for a given distance $r$ and masses $m_1$ and $m_2$.As a reminder, the formula for the gravitational force is$F=G \frac{m_1 m_2}{r^2}$
###Code
# Your solution:
def Sila(m_1, m_2, r):
F=G* m_1*m_2/r**2
return (F)
Sila(10,10,100)
###Output
_____no_output_____
###Markdown
Task 3Write a function that computes the pressure in a container of volume $V$ and temperature $T$ that holds $N$ particles.As a reminder, the formula for the pressure is$p=\frac{N k_b T}{V}$
###Code
# Your solution:
def tlak(N,T,V):
p=N*kb*T/V
return p
tlak(6e23, 270,1)
###Output
_____no_output_____
###Markdown
Task 4Write a function that returns the final velocities of two balls after a perfectly elastic collision. The function takes as input arguments the masses $m_1$, $m_2$ and the velocities $u_1$ and $u_2$ of the balls before the collision. The output will be the new velocities $v_1$ and $v_2$. Hint: Using conservation of energy and momentum we arrive at the following expressions for the new velocities.$v_1=\frac{u_1 (m_1-m_2)+2 m_2u_2}{m_1+m_2}$$v_2=\frac{u_2 (m_2-m_1)+2 m_1u_1}{m_1+m_2}$
###Code
# Your solution:
def zrazka(m_1,m_2,u_1,u_2):
return ((u_1 *(m_1-m_2)+2*m_2*u_2)/(m_1+m_2),(u_2 *(m_2-m_1)+2* m_1*u_1)/(m_1+m_2))
zrazka(1,1,10,-10)
###Output
_____no_output_____
###Markdown
ListsSo far we have met numbers (integers, reals), strings and, briefly, logical values.From all of these elements we can build collections, called `lists` in programming.To start, let's look at how a list is created. Such a thing is generally called a data structure.
###Code
li = [] # an empty list
ve = [4, 2, 3] # a list with numbers
ve
ve[0] # indexing starts at zero!
ve[1]
ve[-1] # getting the last element
w = [5, 10, 15]
###Output
_____no_output_____
###Markdown
What happens if we add lists together? They get joined.
###Code
ve + w
###Output
_____no_output_____
###Markdown
Can we multiply them?
###Code
ve * ve
###Output
_____no_output_____
###Markdown
Bad luck, we can't. But notice how useful the error message is: it clearly tells us that `list`s cannot be multiplied. We can do various other useful things with lists. For example, sum them.
###Code
sum(ve)
###Output
_____no_output_____
###Markdown
Or find out their length:
###Code
len(ve)
###Output
_____no_output_____
###Markdown
Or sort them: Or append a new element to the end:
###Code
ve.append(10)
ve
###Output
_____no_output_____
###Markdown
Or remove one: Ranges A list can also be defined via a range:
###Code
range(10)
type(range(10))
list(range(10))
list(range(3, 9))
###Output
_____no_output_____
###Markdown
Task 5Compute:* the sum of all numbers from 1 to 1000.Create a list `letnaskola` containing your 5 favourite integers. * Append the number 100 to the end of the list* Overwrite the first number in the list so that it equals the last one.* Compute the sum of the first number, the last number and the length of the list.
###Code
# Your solution:
zoznam = list(range(1,1001))
print(sum(zoznam))
letnaskola = [1,1995,12,6,42]
print(letnaskola)
letnaskola.append(100)
print(letnaskola)
letnaskola[0] = letnaskola[len(letnaskola)-1]
print(letnaskola)
print(letnaskola[0]+letnaskola[len(letnaskola)-1], len(letnaskola))
###Output
500500
[1, 1995, 12, 6, 42]
[1, 1995, 12, 6, 42, 100]
[100, 1995, 12, 6, 42, 100]
200 6
###Markdown
For loopsWe can go through the indices of a list one by one. A for loop is a so-called `iterator` that iterates over a list.
###Code
for i in [3,2,5,6]:
print(i)
for i in [3,2,5,6]:
print(i**2)
###Output
9
4
25
36
###Markdown
How do we write a for loop correctly? Similarly to functions:* `for`: this word comes first.* `i`: the iteration variable* `in`: goes before the list we are iterating over.* a colon at the end of the first line.* the code that is looped over is indented by four spaces. With the help of a for loop we can also sum numbers, e.g. the numbers from 0 to 100:
###Code
suma = 0
for i in range(101): # note why it is 101 and not 100
    suma = suma + i # shorthand: suma += i
print(suma)
###Output
5050
###Markdown
Finding the value of the golden ratio $\varphi$A simple exercise to get acquainted with a so-called self-consistent problem and the for loop.The golden ratio can be found as the solution of the equation$x=1+1/x$We can look for its solution by repeated iteration
###Code
x = 1;
for i in range (0,20):
x = 1+1/x
print (x)
###Output
2.0
1.5
1.6666666666666665
1.6
1.625
1.6153846153846154
1.619047619047619
1.6176470588235294
1.6181818181818182
1.6179775280898876
1.6180555555555556
1.6180257510729614
1.6180371352785146
1.6180327868852458
1.618034447821682
1.618033813400125
1.6180340557275543
1.6180339631667064
1.6180339985218035
1.618033985017358
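A small variation on the loop above, as a sketch: instead of a fixed number of steps, iterate until two successive values differ by less than a chosen tolerance (the `while` loop and the 1e-12 tolerance are additions for illustration, not part of the original exercise).
```
x_old, x_new = 1.0, 2.0
while abs(x_new - x_old) > 1e-12:   # stop once the value no longer changes appreciably
    x_old = x_new
    x_new = 1 + 1/x_old
print(x_new)
```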
###Markdown
Task 6Use a for loop to compute the sum of the squares of all odd numbers from 1 to 100.
###Code
# Your solution:
suma = 0
for i in range(50):
suma = suma + (2*i+1)**2
print(suma)
###Output
166650
###Markdown
Task 7 Double-barrelled tank (FKS 30.2.2.A2)From who knows where, we have a bombastic tank with two barrels pointing in opposite directions, arranged of course so that they do not point at each other ;-). The tank holds $N = 42$ shells of mass $m = 20$ kg. Together with the shells the tank weighs $M = 43$ t in total. The tank then starts firing shells alternately from its barrels at a speed of $v = 1000$ m/s with a firing frequency of $f = 0.2$ Hz. Since the tank is unbraked and well oiled, it starts to move. How far from its original position does it fire the last shell? How big an error would we make if we neglected the change of the tank's total mass during the firing?Hint: http://old.fks.sk/archiv/2014_15/30vzorakyLeto2.pdf , page 10
###Code
x=0
m=20
Mtank=[43000]
vtank=[0]
v=-1000
f=0.2
for i in range(43):
x=x+vtank[-1]*1/f
vtank.append(vtank[-1]-m*v/(Mtank[-1]-m))
Mtank.append(Mtank[-1]-m)
v=-v
print(x)
###Output
48.83697212654822
###Markdown
ConditionsWe will understand them through an example. Change `a` and see what it does.
###Code
a = 5
if a == 3:
    print("the number a equals three.")
elif a == 5:
    print("the number a equals five")
else:
    print("the number a equals neither three nor five.")
###Output
the number a equals five
###Markdown
With the help of a condition we can now print, for example, only the even numbers from a for loop. We identify an even number as one that gives remainder zero after division by two.The percent sign is used for the remainder after division:
###Code
for i in range(10):
if i % 2 == 0:
print(i)
###Output
0
2
4
6
8
###Markdown
We can stop a loop when some condition is violated
###Code
for i in range(20):
    print(i)
    if i>10:
        print('The end.')
        break
###Output
0
1
2
3
4
5
6
7
8
9
10
11
The end.
###Markdown
Finding the value of Ludolph's number $\pi$Using the Monte Carlo method of integration we will learn how to compute, for example, $\pi$.The following commands generate a list of random numbers between zero and one
###Code
import random as rnd
import numpy as np
NOP = 50000
CoordXList = [];
CoordYList = [];
for j in range (NOP):
CoordXList.append(rnd.random())
CoordYList.append(rnd.random())
###Output
_____no_output_____
###Markdown
We will use these two lists as the $x$ and $y$ coordinates of points in the plane. Since the random distribution of the points is uniform, the ratio of the number of points lying inside a quarter circle of radius one to the number of all points must equal the ratio of the areas of the quarter circle and the square. Hence $$\frac{\frac{1}{4}\pi 1^2}{1^2}\stackrel{!}{=}\frac{N_{in}}{NOP}.$$The following two cells generate a picture of the point distribution and the quarter circle
###Code
CircPhi = np.arange(0,np.pi/2,0.01)
import matplotlib.pyplot as plt
f1=plt.figure(figsize=(7,7))
plt.plot(
CoordXList,
CoordYList,
color = "red",
linestyle= "none",
marker = ","
)
plt.plot(np.cos(CircPhi),np.sin(CircPhi))
#plt.axis([0, 1, 0, 1])
#plt.axes().set_aspect('equal', 'datalim')
plt.show(f1)
###Output
_____no_output_____
###Markdown
Task 8Now your task is to compute $\pi$. Hint: a point is inside the quarter circle as long as $x^2+y^2<1.$
###Code
# your solution
NumIn = 0
for j in range (NOP):
#if (CoordXList[j] - 0.5)*(CoordXList[j] - 0.5) + (CoordYList[j] - 0.5)*(CoordYList[j] - 0.5) < 0.25:
if CoordXList[j]*CoordXList[j] + CoordYList[j]*CoordYList[j] <= 1:
NumIn = NumIn + 1;
NumIn/NOP*4
###Output
_____no_output_____
###Markdown
Numerical summationIn physics it is often useful to split a problem into small parts. Task 9Now your task is to figure out how to compute the gravitational field of a one-dimensional rod at a height $h$ above the centre of the rod. The rod has mass $M$ and length $L$. ![Image of the rod](http://fks.sk/~fero/usecka.png)You divide the rod into $N$ small pieces. The mass of one such piece is then $$dm=\frac{M}{N}$$The distance of such a point from the centre of the rod is $x$.The gravitational field of that small piece at the desired point is then:$$\vec{\Delta g}=-G \frac{\Delta m}{r^3}\vec{r}.$$Split into the $y$ and $x$ components:$$\Delta g_y=-G \frac{\Delta m}{(x^2+h^2)}\cos(\phi)=-G \frac{\Delta m}{(x^2+h^2)}\frac{h}{\sqrt{x^2+h^2}},$$and$$\Delta g_x=-G \frac{\Delta m}{(x^2+h^2)}\sin(\phi)=-G \frac{\Delta m}{(x^2+h^2)}\frac{x}{\sqrt{x^2+h^2}},$$Your task is to divide up such a rod and sum the contributions of all the small pieces. Convince yourself that, since we are above the centre of the rod, the $x$ contributions cancel each other out.If this seems too easy, you can write a program that computes the gravitational field above an arbitrary point
###Code
N=1000
M=1000
L=2
h=1
# your solution
g = 0
for i in range(-int(N/2), int(N/2)):
    x = (i + 0.5)*L/N                          # position of the centre of the i-th piece, measured from the rod centre
    g = g + G*M/N * h/(x**2 + h**2)**(3/2)     # vertical component; the horizontal contributions cancel by symmetry
g
###Output
_____no_output_____
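For the optional extension mentioned in Task 9 (the field above an arbitrary point rather than above the centre), here is a hedged sketch. Measuring the horizontal offset `x0` of the field point from the centre of the rod is an assumption of this sketch.
```
def rod_field(M, L, h, x0, N=1000):
    # gravitational field (gx, gy) at height h above a point offset x0 from the rod centre;
    # gy is the magnitude of the component pointing towards the rod
    dm = M / N
    gx, gy = 0.0, 0.0
    for i in range(N):
        x = -L/2 + (i + 0.5)*L/N      # centre of the i-th piece
        dx = x - x0                   # horizontal distance from the field point
        r2 = dx**2 + h**2
        gx += G*dm*dx / r2**1.5
        gy += G*dm*h / r2**1.5
    return gx, gy

print(rod_field(1000, 2, 1, 0.0))     # above the centre the x component should (almost) vanish
```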
###Markdown
The Earth orbiting the SunWe all (I hope!) know the physics.* gravitational force:$$ \mathbf F(\mathbf r) = -\frac{G m M}{r^3} \mathbf r $$ Euler's algorithm (bad)$$\begin{align}a(t) &= F(t)/m \\v(t+dt) &= v(t) + a(t) dt \\x(t+dt) &= x(t) + v(t) dt \\\end{align}$$ Verlet's algorithm (good)$$ x(t+dt) = 2 x(t) - x(t-dt) + a(t) dt^2 $$
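For comparison, a sketch of a single step of the (worse) Euler scheme written out above; the cell below uses the Verlet step instead.
```
# one Euler step (for illustration only; for orbits it drifts noticeably over time)
def euler_step(x, v, a, dt):
    v_new = v + a*dt
    x_new = x + v*dt
    return x_new, v_new
```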
###Code
from numpy.linalg import norm
G = 6.67e-11
Ms = 2e30
Mz = 6e24
dt = 86400.0
N = int(365*86400.0/dt)
#print(N)
R0 = 1.5e11
r_list = np.zeros((N, 2))
r_list[0] = [R0, 0.0] # lists can be mixed with ndarrays
v0 = 29.7e3
v_list = np.zeros((N, 2))
v_list[0] = [0.0, v0]
# force between the bodies
def force(A, r):
    return -A / norm(r)**3 * r
# Verlet integration
def verlet_step(r_n, r_nm1, a, dt): # r_nm1 -- r at step n minus 1
    return 2*r_n - r_nm1 + a*dt**2
# the first step is special
a = force(G*Ms, r_list[0])
r_list[1] = r_list[0] + v_list[0]*dt + a*dt**2/2
# solving the equations of motion
for i in range(2, N):
a = force(G*Ms, r_list[i-1])
r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)
plt.plot(r_list[:, 0], r_list[:, 1])
plt.xlim([-2e11, 2e11])
plt.ylim([-2e11, 2e11])
plt.xlabel("$x$", fontsize=20)
plt.ylabel("$y$", fontsize=20)
plt.gca().set_aspect('equal', adjustable='box')
#plt.axis("equal")
plt.show()
###Output
_____no_output_____
###Markdown
Let's add the Moon
###Code
Mm = 7.3e22
R0m = R0 + 384e6
v0m = v0 + 1e3
rm_list = np.zeros((N, 2))
rm_list[0] = [R0m, 0.0]
vm_list = np.zeros((N, 2))
vm_list[0] = [0.0, v0m]
# first Verlet step
am = force(G*Ms, rm_list[0]) + force(G*Mz, rm_list[0] - r_list[0])
rm_list[1] = rm_list[0] + vm_list[0]*dt + am*dt**2/2
# solving the equations of motion
for i in range(2, N):
a = force(G*Ms, r_list[i-1]) - force(G*Mm, rm_list[i-1]-r_list[i-1])
am = force(G*Ms, rm_list[i-1]) + force(G*Mz, rm_list[i-1]-r_list[i-1])
r_list[i] = verlet_step(r_list[i-1], r_list[i-2], a, dt)
rm_list[i] = verlet_step(rm_list[i-1], rm_list[i-2], am, dt)
plt.plot(r_list[:, 0], r_list[:, 1])
plt.plot(rm_list[:, 0], rm_list[:, 1])
plt.xlabel("$x$", fontsize=20)
plt.ylabel("$y$", fontsize=20)
plt.gca().set_aspect('equal', adjustable='box')
plt.xlim([-2e11, 2e11])
plt.ylim([-2e11, 2e11])
plt.show() # the Moon is barely visible, but we know it is there
###Output
_____no_output_____
###Markdown
A task for you: add Mars :)Add Mars! Mathematical pendulum with drag Simulate a mathematical pendulum with drag $\gamma$,$$ \ddot \theta = -\frac g l \sin\theta -\gamma \dot\theta^2,$$using the `odeint` method.Or the fall of a body through a resistive medium:$$ a = -g - kv^2.$$
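A hedged sketch of the pendulum with drag solved with `odeint`; the parameter values (g = 10, l = 1, gamma = 0.5) are assumptions for illustration, and the drag term is written as omega*|omega| so that it always opposes the motion.
```
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt

def pendulum(y, t, g, l, gamma):
    theta, omega = y
    return [omega, -g/l*np.sin(theta) - gamma*omega*abs(omega)]

t = np.linspace(0, 10, 500)
sol = odeint(pendulum, [1.0, 0.0], t, args=(10.0, 1.0, 0.5))
plt.plot(t, sol[:, 0])
plt.xlabel("$t$")
plt.ylabel(r"$\theta(t)$")
plt.show()
```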
###Code
from scipy.integrate import odeint
def F(y, t, g, k):
return [y[1], g -k*y[1]**2]
N = 101
k = 1.0
g = 10.0
t = np.linspace(0, 1, N)
y0 = [0.0, 0.0]
y = odeint(F, y0, t, args=(g, k))
plt.plot(t, y[:, 1])
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$v(t)$", fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Harmonic oscillator using the Leapfrog method (a modification of the Verlet algorithm)
###Code
N = 10000
t = linspace(0,100,N)
dt = t[1] - t[0]
# Funkcie
def integrate(F,x0,v0,gamma):
x = zeros(N)
v = zeros(N)
E = zeros(N)
# Initial conditions
x[0] = x0
v[0] = v0
# Integrate the equations using the Leapfrog method (see wiki)
fac1 = 1.0 - 0.5*gamma*dt
fac2 = 1.0/(1.0 + 0.5*gamma*dt)
for i in range(N-1):
v[i + 1] = fac1*fac2*v[i] - fac2*dt*x[i] + fac2*dt*F[i]
x[i + 1] = x[i] + dt*v[i + 1]
E[i] += 0.5*(x[i]**2 + ((v[i] + v[i+1])/2.0)**2)
E[-1] = 0.5*(x[-1]**2 + v[-1]**2)
# Return the solution
return x,v,E
# Look at three different sets of initial conditions
F = zeros(N)
x1,v1,E1 = integrate(F,0.0,1.0,0.0) # x0 = 0.0, v0 = 1.0, gamma = 0.0
x2,v2,E2 = integrate(F,0.0,1.0,0.05) # x0 = 0.0, v0 = 1.0, gamma = 0.05
x3,v3,E3 = integrate(F,0.0,1.0,0.4) # x0 = 0.0, v0 = 1.0, gamma = 0.4
# Draw the plots
plt.rcParams["axes.grid"] = True
plt.rcParams['font.size'] = 14
plt.rcParams['axes.labelsize'] = 18
plt.figure()
plt.subplot(211)
plt.plot(t,x1)
plt.plot(t,x2)
plt.plot(t,x3)
plt.ylabel("x(t)")
plt.subplot(212)
plt.plot(t,E1,label=r"$\gamma = 0.0$")
plt.plot(t,E2,label=r"$\gamma = 0.05$")
plt.plot(t,E3,label=r"$\gamma = 0.4$")
plt.ylim(0,0.55)
plt.ylabel("E(t)")
plt.xlabel("Čas")
plt.legend(loc="center right")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
And what if the oscillator is also damped?
###Code
from numpy import cos, exp   # used below without the np. prefix
def force(f0,t,w,T):
    return f0*cos(w*t)*exp(-t**2/T**2)
F1 = zeros(N)
F2 = zeros(N)
F3 = zeros(N)
for i in range(N-1):
F1[i] = force(1.0,t[i] - 20.0,1.0,10.0)
F2[i] = force(1.0,t[i] - 20.0,0.9,10.0)
F3[i] = force(1.0,t[i] - 20.0,0.8,10.0)
x1,v1,E1 = integrate(F1,0.0,0.0,0.0)
x2,v2,E2 = integrate(F1,0.0,0.0,0.01)
x3,v3,E3 = integrate(F1,0.0,0.0,0.1)
plt.figure()
plt.subplot(211)
plt.plot(t,x1)
plt.plot(t,x2)
plt.plot(t,x3)
plt.ylabel("x(t)")
plt.subplot(212)
plt.plot(t,E1,label=r"$\gamma = 0$")
plt.plot(t,E2,label=r"$\gamma = 0.01$")
plt.plot(t,E3,label=r"$\gamma = 0.1$")
plt.ylabel("E(t)")
plt.xlabel("Time")
plt.rcParams['legend.fontsize'] = 14.0
plt.legend(loc="upper left")
plt.show()
###Output
_____no_output_____ |
notebooks/Huang_Massa_compare_featurizations.ipynb | ###Markdown
MIT Open Source License: Copyright (c) 2018 Daniel C. Elton Permission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in allcopies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THESOFTWARE. Load all libraries, read in data
###Code
%load_ext autoreload
%autoreload 2
from rdkit import Chem
from rdkit.Chem.EState.Fingerprinter import FingerprintMol
from rdkit.Chem import Descriptors
from rdkit.Chem.rdmolops import RDKFingerprint
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn import cross_validation
#from keras import regularizers
#from keras.models import Sequential
#from keras.layers import Dense, Activation, Dropout
#from keras_tqdm import TQDMNotebookCallback
from sklearn import cross_validation
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge, Lasso, LinearRegression, BayesianRidge
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import PCA, FastICA
from rdkit.Avalon.pyAvalonTools import GetAvalonFP
def fp_Estate(mol):
return FingerprintMol(mol)[0][6:38]
#Read the data
data = pd.read_excel('../datasets/Huang_Massa_data_with_all_SMILES.xlsx', skipfooter=1)
target_prop = 'Explosive energy (kj/cc)'
#Add some new columns
data['Mols'] = data['SMILES'].apply(Chem.MolFromSmiles)
#data['Fingerprint'] = data['Mols'].apply(lambda x:GetAvalonFP(x, nBits=500))
data['Fingerprint'] = data['Mols'].apply(fp_Estate)
XEstate = np.array(list(data['Fingerprint']))
#important - add hydrogens!!
data['Mols'] = data['Mols'].apply(Chem.AddHs)
num_mols = len(data)
y = data[target_prop].values
from mmltoolkit.featurizations import *
num_atoms = []
for mol in data['Mols']:
mol = Chem.AddHs(mol)
num_atoms += [mol.GetNumAtoms()]
max_atoms = int(max(num_atoms))
X_Cmat_as_vec = np.zeros((num_mols, (max_atoms**2-max_atoms)//2 + max_atoms))
X_Cmat_eigs = np.zeros((num_mols, max_atoms))
X_Cmat_unsorted_eigs = np.zeros((num_mols, max_atoms))
X_summedBoB = []
filename_list = []
for i, refcode in enumerate(data['Molecular Name']):
filename = '../HM_all_xyz_files/'+refcode+'.xyz'
this_Cmat_eigs, this_Cmat_as_vec = coulombmat_and_eigenvalues_as_vec(filename, max_atoms )
this_Cmat_unsorted_eigs, this_Cmat_as_vec = coulombmat_and_eigenvalues_as_vec(filename, max_atoms, sort=False)
summed_BoB_feature_names, summedBoB = summed_bag_of_bonds(filename)
X_summedBoB += [summedBoB]
filename_list += [filename]
X_Cmat_eigs[i,:] = this_Cmat_eigs
X_Cmat_unsorted_eigs[i,:] = this_Cmat_unsorted_eigs
X_Cmat_as_vec[i,:] = this_Cmat_as_vec
X_summedBoB = np.array(X_summedBoB)
bond_types, X_LBoB = literal_bag_of_bonds(list(data['Mols']))
X_AM_eigs = adjacency_matrix_eigenvalues(list(data['Mols']))
X_AM_eigs_BO = adjacency_matrix_eigenvalues(list(data['Mols']), useBO=True)
X_DM_eigs = distance_matrix_eigenvalues(list(data['Mols']))
X_CP_BO = characteristic_poly(list(data['Mols']), useBO=True)
BoB_feature_list, X_BoB = bag_of_bonds(filename_list, verbose=False)
###Output
_____no_output_____
###Markdown
Oxygen balance descriptors
###Code
data.columns
from mmltoolkit.descriptors import *
data['Oxygen Balance_1600'] = data['Mols'].apply(oxygen_balance_1600)
data['Oxygen Balance_100'] = data['Mols'].apply(oxygen_balance_100)
data['modified OB'] = data['Mols'].apply(modified_oxy_balance)
data['OB atom counts'] = data['Mols'].apply(return_atom_nums_modified_OB)
data['combined_nums'] = data['Mols'].apply(return_combined_nums)
X_OB100 = np.array(list(data['Oxygen Balance_100'])).reshape(-1,1)
X_OB1600 = np.array(list(data['Oxygen Balance_1600'])).reshape(-1,1)
X_OBmod = np.array(list(data['modified OB'])).reshape(-1,1)
X_OB_atom_counts = np.array(list(data['OB atom counts']))
X_combined = np.array(list(data['combined_nums']))
X_Estate_combined = np.concatenate((XEstate, X_combined), axis=1)
X_Estate_combined_Cmat_eigs = np.concatenate((X_Estate_combined, X_Cmat_eigs), axis=1)
X_Estate_combined_lit_BoB = np.concatenate((X_Estate_combined, X_LBoB), axis=1)
X_CustDesrip_lit_BoB = np.concatenate(( X_combined, X_LBoB), axis=1)
HOF = np.array(data['Delta Hf solid (kj/mol)'].values).reshape(-1,1)
densities =np.array(data['Density (g/cm3)'].values).reshape(-1,1)
X_s1 = np.concatenate(( X_LBoB, densities), axis=1)
X_s2 = np.concatenate(( X_LBoB, densities, HOF), axis=1)   # "stacking density & HOF" also uses the heat of formation
st = StandardScaler()
X_s1 = st.fit_transform(X_s1)
X_s2 = st.fit_transform(X_s2)
###Output
_____no_output_____
###Markdown
Experiments with dimensionality reduction / embedding techniques
###Code
from sklearn.manifold import TSNE, SpectralEmbedding
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA, FastICA
from mmltoolkit.CV_tools import *
import warnings
warnings.filterwarnings("ignore")
def test_dimensionality_reduction(X, y, n_test=20, verbosity=0, plot = True):
if (verbosity == 2):
verbose = True
else:
verbose = False
ss = StandardScaler()
num_features = X.shape[1]
n_to_test = np.floor(np.linspace(2, 3*num_features//4, n_test)).astype(int)   # component counts must be integers
technique_abs_errors = {
"PCA" : np.zeros((len(n_to_test),1)),
"PCA+ICA": np.zeros((len(n_to_test),1)),
"t-SNE": np.zeros((len(n_to_test),1)),
"spectral": np.zeros((len(n_to_test),1))
}
for (i_n, ncomp) in enumerate(n_to_test):
if (verbosity == 1): print("doing ", i_n, "of", len(n_to_test))
X_PCA = PCA(n_components=ncomp).fit_transform(X)
#X_PCA = ss.fit_transform(X_PCA)
X_PCA_ICA = FastICA().fit_transform(X_PCA)
#X_PCA_ICA = ss.fit_transform(X_PCA_ICA)
if (ncomp < 4):
X_tsne = TSNE(n_components=ncomp, learning_rate=100).fit_transform(X)
else:
X_tsne = TSNE(n_components=ncomp, method='exact', learning_rate=100).fit_transform(X)
#X_tsne = ss.fit_transform(X_tsne)
X_spectral = SpectralEmbedding(n_components=ncomp).fit_transform(X)
#X_spectral = ss.fit_transform(X_spectral)
dim_reduction_dict = {
"PCA" : X_PCA,
"PCA+ICA": X_PCA_ICA,
"t-SNE": X_tsne,
"spectral": X_spectral
}
for technique in dim_reduction_dict.keys():
scores_dict = tune_KR_and_test(dim_reduction_dict[technique], y, do_grid_search=True, verbose=verbose)
technique_abs_errors[technique][i_n] = -1*scores_dict['test_abs_err'].mean()
if (plot == True):
scores_dict2 = tune_KR_and_test(X_Estate_combined_lit_BoB, y, cv=KFold(n_splits=5,shuffle=True), do_grid_search=True, verbose=False)
score_base_KR = -1*scores_dict2['test_abs_err'].mean()
plt.figure(figsize=(8,6))
plt.clf()
for technique in technique_abs_errors.keys():
plt.plot(n_to_test, technique_abs_errors[technique], '-', label=technique)
plt.plot([min(n_to_test),max(n_to_test)],[score_base_KR, score_base_KR],'--', label='none')
plt.xlabel("num components", fontsize=19)
plt.ylabel("MAE (kJ/cc)", fontsize=19)
plt.legend(fontsize=15)
#plt.savefig('dimensionality_reduction_test_KR_EE.pdf')
plt.show()
return (n_to_test, technique_abs_errors)
scores_dict2 = tune_KR_and_test(X_Estate_combined_lit_BoB, y, cv=KFold(n_splits=5,shuffle=True), do_grid_search=True, verbose=False)
score_base_KR = -1*scores_dict2['test_abs_err'].mean()
plt.figure(figsize=(8,6))
plt.clf()
# assumes n_to_test and technique_abs_errors were obtained from test_dimensionality_reduction()
for technique in technique_abs_errors.keys():
    plt.plot(n_to_test, technique_abs_errors[technique], '-', label=technique)
plt.plot([min(n_to_test),max(n_to_test)],[score_base_KR, score_base_KR],'--', label='none')
plt.xlabel("num components", fontsize=19)
plt.ylabel("MAE (kJ/cc)", fontsize=19)
plt.legend(fontsize=15)
plt.savefig('dimensionality_reduction_test_KR_EE.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Comparison of featurizations
###Code
featurization_dict = {#"stacking density" : X_s1,
#"stacking density & HOF" : X_s2,
# "truncated E-state": XEstate,
"Oxygen balance$_{100}$": X_OB100,
#"Oxygen balance$_{1600}$": X_OB1600,
#"Oxygen balance atom counts": X_OB_atom_counts,
#"custom descriptor set (CDS)": X_combined,
#"sum over bonds (SoB)" : X_LBoB,
#'E-state + custom descriptor set': X_Estate_combined,
#"Coulomb matrices as vec" : X_Cmat_as_vec,
#"Coulomb matrix eigenvalues": X_Cmat_eigs,
#"Bag of Bonds (BoB)": X_BoB,
#"Summed Bag of Bonds": X_summedBoB,
#"E-state + CDS + SoB":X_Estate_combined_lit_BoB,
#"CDS + SoB": X_CustDesrip_lit_BoB,
#"SoB + OB100": np.concatenate(( X_LBoB, X_OB100), axis=1)
#"Adjacency matrix eigenvalues": X_AM_eigs,
#"Adjacency matrix w/ bond order eigenvalues ": X_AM_eigs_BO,
# Characteristic polynomial of AM w BO": X_CP_BO,
# "Graph distance matrix eigenvalues": X_DM_eigs,
}
from mmltoolkit.featurization_comparison import *
import warnings
warnings.filterwarnings("ignore")
#targets = [ 'Density (g/cm3)', 'Delta Hf solid (kj/mol)', 'Explosive energy (kj/cc)', 'Shock velocity (km/s)',
# 'Particle velocity (km/s)', 'Speed of sound (km/s)', 'Pressure (Gpa)', 'T(K)', 'TNT Equiv (per cc)']
#units = ['g/cc', 'kJ/mol', 'kJ/cc', 'km/s', 'km/s', 'km/s', 'GPa', 'K', '']
#targets = ['Pressure (Gpa)']
#units = ['GPa']
targets = ['Explosive energy (kj/cc)']
units = ['kJ/cc']
for (i, target) in enumerate(targets):
y = data[target].values
test_featurizations_and_plot(featurization_dict, y, target_prop_name=target,
units=units[i], verbose=True, save_plot=False,
)
###Output
running Oxygen balance$_{100}$
doing outer fold 1 of 20
###Markdown
Experiments with feature importance ranking with random forest
###Code
%matplotlib inline
from sklearn.model_selection import validation_curve, KFold
import matplotlib.pyplot as plt
num_to_try = np.linspace(2,80 , 10, dtype=int)
train_scores, valid_scores = validation_curve(RandomForestRegressor(),
X_LBoB, y,
"n_estimators",
num_to_try, cv=KFold(n_splits=5,shuffle=True), n_jobs=4,
scoring = 'neg_mean_absolute_error')
fig = plt.figure(figsize=(10,4))
plt.plot(num_to_try, -1*np.mean(valid_scores, axis=1),"-*")
plt.xlabel("n_estimators", fontsize=20)
plt.ylabel('Mean Abs. Error', fontsize=20)
plt.title('Tuning curve', fontsize=25)
plt.show()
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
rf = RandomForestRegressor(n_estimators=30)
rf = rf.fit(X_Estate_combined, y)
num_features = len(X_Estate_combined[0,:])
Estate_feature_names=[
"-CH3",
"=CH2",
"—CH2—",
"#CH",
"=CH-",
"aCHa",
">CH-",
"=c=",
"#C-",
"=C<",
"aCa",
"aaCa",
">C<",
"-NH3[+1]",
"-NH2",
"-NH2-[+1]",
"=NH",
"-NH-",
"aNHa",
"#N",
">NH-[+1]",
"=N—",
"aNa",
">N—",
"—N<<",
"aaNs",
">N<[+1]",
"-OH",
"=0",
"-0-",
"aOa",
"-F-"]
combined_descriptor_set_name = [
"OB_100",
"n_C",
"n_N",
"n_NO",
"n_COH",
"n_NOC",
"n_CO",
"n_H",
"n_F",
"n_N/n_C",
"n_CNO2",
"n_NNO2",
"n_ONO",
"n_ONO2",
"n_CNN",
"n_NNN",
"n_CNO",
"n_CNH2",
"n_CN(O)C",
"n_CF",
"n_CNF"
]
###Output
_____no_output_____
###Markdown
Test of feature importance ranking
###Code
from mmltoolkit.feature_importance import *
sorted_feature_names, sorted_values = LASSO_feature_importance(X_LBoB, y, bond_types, print_latex=True)
sorted_feature_names, sorted_values = random_forest_feature_importance(X_LBoB, y, bond_types, print_latex=True)
###Output
\begin{table}
\begin{tabular}{c c}
feature & coeff. \\
N#N & -0.337 \\
N-N & +0.220 \\
C-H & -0.188 \\
N=O & +0.184 \\
F-N & +0.161 \\
C:C & -0.156 \\
N:O & +0.155 \\
N=N & +0.123 \\
N-O & +0.100 \\
C:N & -0.061 \\
H-N & -0.059 \\
N:N & -0.052 \\
C-N & -0.038 \\
C-C & -0.032 \\
H-O & +0.028 \\
C-F & +0.027 \\
C=N & -0.013 \\
C=O & -0.009 \\
C-O & +0.002 \\
C=C & 0.000 \\
\end{tabular}
\caption{LASSO feature importances.}
\end{table}
\begin{table}
\begin{tabular}{c c}
feature & coeff. \\
C-H & 0.235 \\
C:C & 0.205 \\
N=O & 0.112 \\
C-C & 0.100 \\
N-O & 0.096 \\
C-N & 0.090 \\
N-N & 0.038 \\
C-O & 0.026 \\
H-N & 0.022 \\
F-N & 0.016 \\
N:N & 0.013 \\
C-F & 0.009 \\
C:N & 0.009 \\
C=O & 0.008 \\
N:O & 0.008 \\
N=N & 0.004 \\
H-O & 0.003 \\
C=N & 0.003 \\
C=C & 0.002 \\
N#N & 0.002 \\
\end{tabular}
\caption{random forest mean decrease impurity feature importances.}
\end{table}
|
Ch13_CV/13-3.ipynb | ###Markdown
http://preview.d2l.ai/d2l-en/master/chapter_computer-vision/bounding-box.html
###Code
%matplotlib inline
from d2l import mxnet as d2l
from mxnet import image, npx
npx.set_np()
d2l.set_figsize()
img = image.imread('../img/catdog.jpg').asnumpy()
d2l.plt.imshow(img);
# bbox is the abbreviation for bounding box
dog_bbox, cat_bbox = [60, 45, 378, 516], [400, 112, 655, 493]
#@save
def bbox_to_rect(bbox, color):
"""Convert bounding box to matplotlib format."""
# Convert the bounding box (top-left x, top-left y, bottom-right x,
# bottom-right y) format to matplotlib format: ((upper-left x,
# upper-left y), width, height)
return d2l.plt.Rectangle(
xy=(bbox[0], bbox[1]), width=bbox[2]-bbox[0], height=bbox[3]-bbox[1],
fill=False, edgecolor=color, linewidth=2)
fig = d2l.plt.imshow(img)
fig.axes.add_patch(bbox_to_rect(dog_bbox, 'blue'))
fig.axes.add_patch(bbox_to_rect(cat_bbox, 'red'));
###Output
_____no_output_____ |
Hacker Rank Problems.ipynb | ###Markdown
Given the names and grades for each student in a class of N students, store them in a nested list and print the name(s) of any student(s) having the second lowest grade.
###Code
if __name__ == '__main__':
student_list = []
scores = set()
second_lowest_names = []
for _ in range(int(input())):
name = input()
score = float(input())
student_list.append([name,score])
scores.add(score)
second_lowest = sorted(scores)[1]
for name,score in student_list:
if score == second_lowest:
second_lowest_names.append(name)
for names in sorted(second_lowest_names):
print(names,end='\n')
sorted(scores)
###Output
_____no_output_____
###Markdown
The provided code stub will read in a dictionary containing key/value pairs of name:[marks] for a list of students. Print the average of the marks array for the student name provided, showing 2 places after the decimal.
###Code
if __name__ == '__main__':
n = int(input())
student_marks = {}
for _ in range(n):
name, *line = input().split()
scores = list(map(float, line))
student_marks[name] = scores
query_name = input()
student_marks
names_list = []
for x in student_marks.keys():
names_list.append(x)
names_list
query_name = input()
for name_test in names_list:
    if query_name == name_test:
        query_scores = student_marks[name_test]   # keep the matching student's scores
    else:
        pass
marks_list = []
for y in student_marks.values():
marks_list.append(y)
marks_list
marks_list[0]
total = 0
for mark in student_marks[query_name]:
    total += mark
total
print("{:.2f}".format(total/len(student_marks[query_name])))
scores
###Output
_____no_output_____ |
6_visualize_trends_return_values.ipynb | ###Markdown
Visualize Trends in Return Values for a 1-in-10-year Event---**Project**: Masters Project **Author**: Nabig Chaudhry
###Code
# import necessary packages
import requests
import numpy as np
import pandas as pd
import xarray as xr
from datetime import datetime
import os as os
from scipy import stats
import statsmodels.api as sm
import lmoments3 as lm
from lmoments3 import distr as ldistr
import matplotlib.pyplot as plt
from matplotlib import colors
%matplotlib inline
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import geoviews as gv
import holoviews as hv
import hvplot.pandas
import hvplot.xarray
from bokeh.io import export_svgs
# import functions specifically for masters_project
%load_ext autoreload
%autoreload 2
import masters_project_functions as mp
###Output
_____no_output_____
###Markdown
Step 1: Pull Final Return Value Data For Analysis
###Code
os.listdir('./data/final_for_analysis/return_value')
import_path = './data/final_for_analysis/return_value'
cesm2_hist_1980_rv = xr.open_dataset(f'{import_path}/cesm2_hist_1980_rv.nc')
cesm2_ssp370_2020_rv = xr.open_dataset(f'{import_path}/cesm2_ssp370_2020_rv.nc')
cesm2_ssp370_2040_rv = xr.open_dataset(f'{import_path}/cesm2_ssp370_2040_rv.nc')
cesm2_ssp370_2060_rv = xr.open_dataset(f'{import_path}/cesm2_ssp370_2060_rv.nc')
cesm2_ssp370_2080_rv = xr.open_dataset(f'{import_path}/cesm2_ssp370_2080_rv.nc')
cnrm_hist_1980_rv = xr.open_dataset(f'{import_path}/cnrm_hist_1980_rv.nc')
cnrm_ssp370_2020_rv = xr.open_dataset(f'{import_path}/cnrm_ssp370_2020_rv.nc')
cnrm_ssp370_2040_rv = xr.open_dataset(f'{import_path}/cnrm_ssp370_2040_rv.nc')
cnrm_ssp370_2060_rv = xr.open_dataset(f'{import_path}/cnrm_ssp370_2060_rv.nc')
cnrm_ssp370_2080_rv = xr.open_dataset(f'{import_path}/cnrm_ssp370_2080_rv.nc')
###Output
_____no_output_____
###Markdown
Step 2: Pull Return Value Arrays for CESM2 Global Climate Model
###Code
cesm2_hist_1980_rv_array = mp.get_flat_array(cesm2_hist_1980_rv,
data_variable='return_value')
cesm2_hist_1980_rv_lower_array = mp.get_flat_array(cesm2_hist_1980_rv,
data_variable='conf_int_lower_limit')
cesm2_hist_1980_rv_upper_array = mp.get_flat_array(cesm2_hist_1980_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cesm2_hist_1980_df = pd.DataFrame({'return_value': cesm2_hist_1980_rv_array,
'return_value_lower': cesm2_hist_1980_rv_lower_array,
'return_value_upper': cesm2_hist_1980_rv_upper_array})
cesm2_ssp370_2020_rv_array = mp.get_flat_array(cesm2_ssp370_2020_rv,
data_variable='return_value')
cesm2_ssp370_2020_rv_lower_array = mp.get_flat_array(cesm2_ssp370_2020_rv,
data_variable='conf_int_lower_limit')
cesm2_ssp370_2020_rv_upper_array = mp.get_flat_array(cesm2_ssp370_2020_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cesm2_ssp370_2020_df = pd.DataFrame({'return_value': cesm2_ssp370_2020_rv_array,
'return_value_lower': cesm2_ssp370_2020_rv_lower_array,
'return_value_upper': cesm2_ssp370_2020_rv_upper_array})
cesm2_ssp370_2040_rv_array = mp.get_flat_array(cesm2_ssp370_2040_rv,
data_variable='return_value')
cesm2_ssp370_2040_rv_lower_array = mp.get_flat_array(cesm2_ssp370_2040_rv,
data_variable='conf_int_lower_limit')
cesm2_ssp370_2040_rv_upper_array = mp.get_flat_array(cesm2_ssp370_2040_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cesm2_ssp370_2040_df = pd.DataFrame({'return_value': cesm2_ssp370_2040_rv_array,
'return_value_lower': cesm2_ssp370_2040_rv_lower_array,
'return_value_upper': cesm2_ssp370_2040_rv_upper_array})
cesm2_ssp370_2060_rv_array = mp.get_flat_array(cesm2_ssp370_2060_rv,
data_variable='return_value')
cesm2_ssp370_2060_rv_lower_array = mp.get_flat_array(cesm2_ssp370_2060_rv,
data_variable='conf_int_lower_limit')
cesm2_ssp370_2060_rv_upper_array = mp.get_flat_array(cesm2_ssp370_2060_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cesm2_ssp370_2060_df = pd.DataFrame({'return_value': cesm2_ssp370_2060_rv_array,
'return_value_lower': cesm2_ssp370_2060_rv_lower_array,
'return_value_upper': cesm2_ssp370_2060_rv_upper_array})
cesm2_ssp370_2080_rv_array = mp.get_flat_array(cesm2_ssp370_2080_rv,
data_variable='return_value')
cesm2_ssp370_2080_rv_lower_array = mp.get_flat_array(cesm2_ssp370_2080_rv,
data_variable='conf_int_lower_limit')
cesm2_ssp370_2080_rv_upper_array = mp.get_flat_array(cesm2_ssp370_2080_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cesm2_ssp370_2080_df = pd.DataFrame({'return_value': cesm2_ssp370_2080_rv_array,
'return_value_lower': cesm2_ssp370_2080_rv_lower_array,
'return_value_upper': cesm2_ssp370_2080_rv_upper_array})
# create dataframe from all return value arrays
cesm2_rv_df = pd.DataFrame({'cesm2_hist_1980': cesm2_hist_1980_rv_array,
'cesm2_ssp370_2020': cesm2_ssp370_2020_rv_array,
'cesm2_ssp370_2040': cesm2_ssp370_2040_rv_array,
'cesm2_ssp370_2060': cesm2_ssp370_2060_rv_array,
'cesm2_ssp370_2080': cesm2_ssp370_2080_rv_array})
###Output
_____no_output_____
###Markdown
Step 3: Pull Return Value Arrays for CNRM-ESM2-1 Global Climate Model
###Code
cnrm_hist_1980_rv_array = mp.get_flat_array(cnrm_hist_1980_rv,
data_variable='return_value')
cnrm_hist_1980_rv_lower_array = mp.get_flat_array(cnrm_hist_1980_rv,
data_variable='conf_int_lower_limit')
cnrm_hist_1980_rv_upper_array = mp.get_flat_array(cnrm_hist_1980_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cnrm_hist_1980_df = pd.DataFrame({'return_value': cnrm_hist_1980_rv_array,
'return_value_lower': cnrm_hist_1980_rv_lower_array,
'return_value_upper': cnrm_hist_1980_rv_upper_array})
cnrm_ssp370_2020_rv_array = mp.get_flat_array(cnrm_ssp370_2020_rv,
data_variable='return_value')
cnrm_ssp370_2020_rv_lower_array = mp.get_flat_array(cnrm_ssp370_2020_rv,
data_variable='conf_int_lower_limit')
cnrm_ssp370_2020_rv_upper_array = mp.get_flat_array(cnrm_ssp370_2020_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cnrm_ssp370_2020_df = pd.DataFrame({'return_value': cnrm_ssp370_2020_rv_array,
'return_value_lower': cnrm_ssp370_2020_rv_lower_array,
'return_value_upper': cnrm_ssp370_2020_rv_upper_array})
cnrm_ssp370_2040_rv_array = mp.get_flat_array(cnrm_ssp370_2040_rv,
data_variable='return_value')
cnrm_ssp370_2040_rv_lower_array = mp.get_flat_array(cnrm_ssp370_2040_rv,
data_variable='conf_int_lower_limit')
cnrm_ssp370_2040_rv_upper_array = mp.get_flat_array(cnrm_ssp370_2040_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cnrm_ssp370_2040_df = pd.DataFrame({'return_value': cnrm_ssp370_2040_rv_array,
'return_value_lower': cnrm_ssp370_2040_rv_lower_array,
'return_value_upper': cnrm_ssp370_2040_rv_upper_array})
cnrm_ssp370_2060_rv_array = mp.get_flat_array(cnrm_ssp370_2060_rv,
data_variable='return_value')
cnrm_ssp370_2060_rv_lower_array = mp.get_flat_array(cnrm_ssp370_2060_rv,
data_variable='conf_int_lower_limit')
cnrm_ssp370_2060_rv_upper_array = mp.get_flat_array(cnrm_ssp370_2060_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cnrm_ssp370_2060_df = pd.DataFrame({'return_value': cnrm_ssp370_2060_rv_array,
'return_value_lower': cnrm_ssp370_2060_rv_lower_array,
'return_value_upper': cnrm_ssp370_2060_rv_upper_array})
cnrm_ssp370_2080_rv_array = mp.get_flat_array(cnrm_ssp370_2080_rv,
data_variable='return_value')
cnrm_ssp370_2080_rv_lower_array = mp.get_flat_array(cnrm_ssp370_2080_rv,
data_variable='conf_int_lower_limit')
cnrm_ssp370_2080_rv_upper_array = mp.get_flat_array(cnrm_ssp370_2080_rv,
data_variable='conf_int_upper_limit')
# create dataframe of return value and associated confidence interval
cnrm_ssp370_2080_df = pd.DataFrame({'return_value': cnrm_ssp370_2080_rv_array,
'return_value_lower': cnrm_ssp370_2080_rv_lower_array,
'return_value_upper': cnrm_ssp370_2080_rv_upper_array})
# create dataframe from all return value arrays
cnrm_rv_df = pd.DataFrame({'cnrm_hist_1980': cnrm_hist_1980_rv_array,
'cnrm_ssp370_2020': cnrm_ssp370_2020_rv_array,
'cnrm_ssp370_2040': cnrm_ssp370_2040_rv_array,
'cnrm_ssp370_2060': cnrm_ssp370_2060_rv_array,
'cnrm_ssp370_2080': cnrm_ssp370_2080_rv_array})
###Output
_____no_output_____
###Markdown
Step 4: Set Graphing Parameters for Box Plots
###Code
box_fill_color = '#fda847'
box_hover_color = 'blue'
y_axis_min=10
y_axis_max=60
width=650
height=400
columns = ['Historical 1980-99',
'SSP3-7.0 2020-39',
'SSP3-7.0 2040-59',
'SSP3-7.0 2060-79',
'SSP3-7.0 2080-99']
group_label='Time Periods'
value_label='Return Value (Degrees C)'
###Output
_____no_output_____
###Markdown
Step 5: Visualize Trends and Return Value Data Through Box Plots
###Code
df = cesm2_rv_df.rename(columns={'cesm2_hist_1980':'Historical 1980-99',
'cesm2_ssp370_2020':'SSP3-7.0 2020-39',
'cesm2_ssp370_2040':'SSP3-7.0 2040-59',
'cesm2_ssp370_2060':'SSP3-7.0 2060-79',
'cesm2_ssp370_2080':'SSP3-7.0 2080-99'})
title = 'CESM2 Return Values (Degrees C) for 1-in-10-year Event'
cesm2_box_plot = df.hvplot.box(y=columns,
ylim=(y_axis_min, y_axis_max),
width=width,
height=height,
title=title,
box_fill_color=box_fill_color,
box_hover_color=box_hover_color,
group_label=group_label,
value_label=value_label)
cesm2_box_plot
df = cnrm_rv_df.rename(columns={'cnrm_hist_1980':'Historical 1980-99',
'cnrm_ssp370_2020':'SSP3-7.0 2020-39',
'cnrm_ssp370_2040':'SSP3-7.0 2040-59',
'cnrm_ssp370_2060':'SSP3-7.0 2060-79',
'cnrm_ssp370_2080':'SSP3-7.0 2080-99'})
title = 'CNRM-ESM2-1 Return Values (Degrees C) for 1-in-10-year Event'
cnrm_box_plot = df.hvplot.box(y=columns,
ylim=(y_axis_min, y_axis_max),
width=width,
height=height,
title=title,
box_fill_color=box_fill_color,
box_hover_color=box_hover_color,
group_label=group_label,
value_label=value_label)
cnrm_box_plot
###Output
_____no_output_____
###Markdown
Step 6: Set Graphing Parameters for Line Plots
###Code
color_1 = '#fede81'
color_2 = '#fda847'
color_3 = '#fc5d2e'
color_4 = '#ee3123'
color_5 = '#900026'
y_axis_min=10
y_axis_max=60
width=850
height=450
label_1 = 'Historical 1980-99'
label_2 = 'SSP3-7.0 2020-39'
label_3 = 'SSP3-7.0 2040-59'
label_4 = 'SSP3-7.0 2060-79'
label_5 = 'SSP3-7.0 2080-99'
x_label='Index of Least to Greatest Return Values'
y_label='Return Value (Degrees C)'
###Output
_____no_output_____
###Markdown
Step 7: Visualize Trends and Return Value Data Through Line Plots
###Code
# sort return value dataframes from low to high
cesm2_hist_1980_sorted = cesm2_hist_1980_df.sort_values('return_value').reset_index(drop=True)
cesm2_ssp370_2020_sorted = cesm2_ssp370_2020_df.sort_values('return_value').reset_index(drop=True)
cesm2_ssp370_2040_sorted = cesm2_ssp370_2040_df.sort_values('return_value').reset_index(drop=True)
cesm2_ssp370_2060_sorted = cesm2_ssp370_2060_df.sort_values('return_value').reset_index(drop=True)
cesm2_ssp370_2080_sorted = cesm2_ssp370_2080_df.sort_values('return_value').reset_index(drop=True)
cnrm_hist_1980_sorted = cnrm_hist_1980_df.sort_values('return_value').reset_index(drop=True)
cnrm_ssp370_2020_sorted = cnrm_ssp370_2020_df.sort_values('return_value').reset_index(drop=True)
cnrm_ssp370_2040_sorted = cnrm_ssp370_2040_df.sort_values('return_value').reset_index(drop=True)
cnrm_ssp370_2060_sorted = cnrm_ssp370_2060_df.sort_values('return_value').reset_index(drop=True)
cnrm_ssp370_2080_sorted = cnrm_ssp370_2080_df.sort_values('return_value').reset_index(drop=True)
title = 'CESM2 Return Values (Degrees C) for 1-in-10-year Event'
hv_line = cesm2_hist_1980_sorted.hvplot.line(y='return_value',
ylim=(y_axis_min, y_axis_max),
width=width,
height=height,
title=title,
xlabel=x_label,
ylabel=y_label,
label=label_1).opts(color=color_1) * \
cesm2_ssp370_2020_sorted.hvplot.line(y='return_value',
label=label_2).opts(color=color_2) * \
cesm2_ssp370_2040_sorted.hvplot.line(y='return_value',
label=label_3).opts(color=color_3) * \
cesm2_ssp370_2060_sorted.hvplot.line(y='return_value',
label=label_4).opts(color=color_4) * \
cesm2_ssp370_2080_sorted.hvplot.line(y='return_value',
label=label_5).opts(color=color_5) * \
cesm2_hist_1980_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_2,
alpha=0.15).opts(color=color_2) * \
cesm2_ssp370_2020_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_2,
alpha=0.15).opts(color=color_2) * \
cesm2_ssp370_2040_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_3,
alpha=0.15).opts(color=color_3) * \
cesm2_ssp370_2060_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_4,
alpha=0.15).opts(color=color_4) * \
cesm2_ssp370_2080_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_5,
alpha=0.15).opts(color=color_5)
cesm2_line_plot = hv_line.opts(legend_position='right')
cesm2_line_plot
title = 'CNRM-ESM2-1 Return Values (Degrees C) for 1-in-10-year Event'
hv_line = cnrm_hist_1980_sorted.hvplot.line(y='return_value',
ylim=(y_axis_min, y_axis_max),
width=width,
height=height,
title=title,
xlabel=x_label,
ylabel=y_label,
label=label_1).opts(color=color_1) * \
cnrm_ssp370_2020_sorted.hvplot.line(y='return_value',
label=label_2).opts(color=color_2) * \
cnrm_ssp370_2040_sorted.hvplot.line(y='return_value',
label=label_3).opts(color=color_3) * \
cnrm_ssp370_2060_sorted.hvplot.line(y='return_value',
label=label_4).opts(color=color_4) * \
cnrm_ssp370_2080_sorted.hvplot.line(y='return_value',
label=label_5).opts(color=color_5) * \
cnrm_hist_1980_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_2,
alpha=0.15).opts(color=color_2) * \
cnrm_ssp370_2020_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_2,
alpha=0.15).opts(color=color_2) * \
cnrm_ssp370_2040_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_3,
alpha=0.15).opts(color=color_3) * \
cnrm_ssp370_2060_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_4,
alpha=0.15).opts(color=color_4) * \
cnrm_ssp370_2080_sorted.hvplot.area(y='return_value_lower',
y2='return_value_upper',
line_color=color_5,
alpha=0.15).opts(color=color_5)
cnrm_line_plot = hv_line.opts(legend_position='right')
cnrm_line_plot
###Output
_____no_output_____
###Markdown
Step 8: Export Return Value Trend Visualizations
###Code
export_int_image_path = './images/interactive/return_value'
export_stat_image_path = './images/static/return_value'
# export visualizations as interactive html images
hv.save(cesm2_line_plot, f'{export_int_image_path}/cesm2_line_plot.html')
hv.save(cnrm_line_plot, f'{export_int_image_path}/cnrm_line_plot.html')
# define function to export visualizations as static svg files
def export_svg(obj, filename):
plot_state = hv.renderer('bokeh').get_plot(obj).state
plot_state.output_backend = 'svg'
export_svgs(plot_state, filename=filename)
# export visualizations as static svg images
export_svg(cesm2_box_plot, f'{export_stat_image_path}/cesm2_box_plot.svg')
export_svg(cnrm_box_plot, f'{export_stat_image_path}/cnrm_box_plot.svg')
export_svg(cesm2_line_plot, f'{export_stat_image_path}/cesm2_line_plot.svg')
export_svg(cnrm_line_plot, f'{export_stat_image_path}/cnrm_line_plot.svg')
###Output
_____no_output_____ |
Additional Figures.ipynb | ###Markdown
Author distribution in Books3
###Code
import os
def extract_metadata_from_files(path_to_files):
with open(path_to_files, 'r') as f:
files = [x.strip() for x in f.readlines()]
metadata = []
for file in files:
basename = os.path.split(file)[1]
# drop extensions
filename = basename[:-9]
# if filename starts with a year, drop the first token
year = None
try:
year = int(filename[:4])
if 1800 < year < 2025:
filename = ' '.join(filename.split()[1:])
except ValueError:
pass
parts = filename.split(' - ')
parts = [part.strip().encode('ascii', 'ignore').decode('utf-8') for part in parts]
if len(parts) > 1:
title = parts[0]
author = parts[1]
else:
title = parts[0]
author = "Unknown"
metadata.append({"file": file, "title": title, "author": author})
return metadata
books3_metadata = extract_metadata_from_files("../books3/metadata/filenames.txt")
books3_metadata = pd.DataFrame(books3_metadata)
def read_names(infile):
names = set()
with open(infile) as f:
lines = f.readlines()
for line in lines:
line = line.split('(')[0]
line = line.split('[')[0]
line = line.split(',')[0]
name = line.strip().encode('ascii', 'ignore').decode('utf-8')
names.add(name)
return names
american_novelists = read_names(DATA_DIR / 'notebooks/authors/american_novelists.txt')
african_american_authors = read_names(DATA_DIR / 'notebooks/authors/african_american_authors_wikipedia.txt')
asian_american_authors = read_names(DATA_DIR / 'notebooks/authors/asian_american_authors.txt')
latin_american_authors = read_names(DATA_DIR / 'notebooks/authors/latin_american_authors.txt')
central_american_authors = read_names(DATA_DIR / 'notebooks/authors/central_american_authors.txt')
indigenous_authors = read_names(DATA_DIR / 'notebooks/authors/indigenous_authors.txt')
authors = {'American': list(american_novelists),
'African American': list(african_american_authors),
'Asian American': list(asian_american_authors),
'Latin American': list(latin_american_authors),
'Central American': list(central_american_authors),
'Indigenous': list(indigenous_authors),
'books3': list(books3_metadata.author.unique())}
all_authors = (american_novelists.union(african_american_authors)
.union(asian_american_authors)
.union(latin_american_authors)
.union(central_american_authors)
.union(indigenous_authors))
authors = pd.DataFrame.from_dict(authors, orient='index').T
data = []
for column in [x for x in authors.columns if x != 'books3']:
intersection = len(set(authors[column]) & set(authors.books3))
total = len(set(authors[column]))
data.append({'Demographic': column, 'intersection': intersection, 'total': total})
data= pd.DataFrame(data)
data['percent'] = data['intersection']/ data['total'] * 100
import seaborn as sns
sns.set(context='paper', style='white', font_scale=1.3)
sns.set_palette('colorblind')
ax = sns.barplot(data=data,
y='Demographic',
x='percent',
order=['American', 'Asian American', 'African American',
'Indigenous', 'Latin American', 'Central American'])
ax.set_xlabel("% of Wikipedia Author List covered by Books3")
plt.tight_layout()
plt.savefig('authors.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
OWTC distribution
###Code
import pandas as pd
urls = pd.read_json("../paper-data/metadata_full_fnames.jsonl", lines=True)
vals = urls.domain.value_counts(normalize=True)
num = int(len(vals) * 0.01)
cumulative_sum = np.cumsum(vals.values[:num])
sns.lineplot(x=[(x / len(vals)) * 100 for x in range(num)],
y=cumulative_sum)
cumulative_sum
vals.head(n=10)
urls.domain.value_counts().head(n=10)
###Output
_____no_output_____
###Markdown
Pulitzer Prizes
###Code
ff = load_and_score("../demix-data/fanfiction.jsonl")
ff['genre'] = 'fan fiction'
pp = load_and_score("../demix-data/pulitzer_prize_fiction.jsonl")
pp['genre'] = 'fiction'
ppoetry = load_and_score("../demix-data/pulitzer_prize_poetry.jsonl")
ppoetry['genre'] = 'poetry'
pnf = load_and_score("../demix-data/pulitzer_prize_nonfiction.jsonl")
pnf['genre'] = 'nonfiction'
pdra = load_and_score("../demix-data/pulitzer_prize_drama.jsonl")
pdra['genre'] = 'drama'
m = pd.concat([ff.sample(pp.shape[0]), pp, ppoetry, pnf, pdra], 0)
sns.set(style='white',font_scale=1.2,context='paper')
ax = sns.boxplot(data=m, x='genre', y='prob_high_quality', linewidth=2, order=['nonfiction', 'fiction', 'poetry', 'drama'])
plt.axhline(y=ff.prob_high_quality.median(), color='gray', linestyle='--')
plt.text(-0.45, 0.46, "median quality", fontsize=12,fontstyle='italic')
plt.text(-0.45, 0.41, "of BooksCorpus", fontsize=12,fontstyle='italic')
from matplotlib import patches
style = "Simple, tail_width=0.5, head_width=4, head_length=8"
kw = dict(arrowstyle=style, color="k")
a1 = patches.FancyArrowPatch((0.6, 0.42), (0.8, ff.prob_high_quality.median()),
connectionstyle="arc3,rad=.5", **kw)
for a in [a1]:
plt.gca().add_patch(a)
ax.set_ylabel("P(high quality)")
ax.set_xlabel("Pulitzer Prize Category")
plt.tight_layout()
# plt.savefig("pulitzer_prize.pdf", dpi=300, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
High and low factuality news
###Code
high_news = load_and_score(path=DATA_DIR / "articles-high-reliability-clean.jsonl")
low_news = load_and_score(path=DATA_DIR / "articles-low-reliability-clean.jsonl")
sns.histplot(high_news.prob_high_quality, label='News from High Factuality Sources', kde=True, stat='density', color='#00bfff', alpha=0.6)
ax = sns.histplot(low_news.prob_high_quality, label='News from Low Factuality Sources', kde=True, stat='density', color='#ff6961', alpha=0.6)
plt.legend()
ax.set_xlabel("P(high quality)")
plt.tight_layout()
plt.savefig("high_low_news.pdf", dpi=300, bbox_inches='tight')
from scipy import stats
stats.ks_2samp(high_news.prob_high_quality, low_news.prob_high_quality)
###Output
_____no_output_____ |
notebooks/child_example.ipynb | ###Markdown
Install pymt and the Child plug-in Use the conda command to install the complete CSDMS software stack,```bash$ conda install csdms-stack -c conda-forge -c csdms-stack``` Define a function to plot some child output. We'll use this later on.
###Code
import matplotlib.pyplot as plt
def plot_tris(x, y, tris, val):
plt.tripcolor(x, y, tris, val, edgecolors='k', vmin=-200, vmax=200)
plt.gca().set_aspect('equal')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Run a general PyMT model Import the `Child` component from `pymt`. All of the components available to `pymt` are located in `pymt.components`.
###Code
from pymt.components import Child as Model
###Output
_____no_output_____
###Markdown
The following code is generic for all `pymt` components - not just `Child`. First we instantiate the component and then call its `setup` method to create a model simulation with any input files. `setup` takes an optional argument that gives a path to a folder that will contain the necessary input files (if not provided, a temporary folder will be used). `setup` return the name of the config file and the path to the folder containing it.
###Code
model = Model()
config_file, initdir = model.setup('_model')
###Output
_____no_output_____
###Markdown
Now that we have a valid config file, we can initialize the model, which brings it to the point that we can start time stepping.
###Code
model.initialize(config_file, dir=initdir)
###Output
_____no_output_____
###Markdown
We now update it for 10 time steps, and then clean up with the `finalize` method.
###Code
for _ in xrange(10):
model.update()
model.finalize()
###Output
_____no_output_____
###Markdown
Run the CHILD model Now we will run the Child model and look at some real output. As before, first import the `Child` class and instantiate it.
###Code
from pymt.components import Child as Model
model = Model()
###Output
_____no_output_____
###Markdown
We run `setup` and `initialize` as before but notice we can pass keywords to `setup`. In this case, we set the grid spacing to 500 m through a keyword to `setup`. Use `help(model)` to get a full list of possible keywords.
###Code
model.initialize(*model.setup('_model', grid_node_spacing=500.))
###Output
_____no_output_____
###Markdown
As with all `pymt` components you can get a list of the names of all the input and output variables. Here we get all of the output variables with the `output_var_names` attribute.
###Code
model.output_var_names
###Output
_____no_output_____
###Markdown
The `get_value` method takes the name of an output item and returns an array of values for that variable.
###Code
z = model.get_value('surface__elevation')
###Output
_____no_output_____
###Markdown
Every variable is attached to a grid that defines where variables are defined and how they are connected to their neighbors. Each grid is given an id, which is passed to the BMI `get_grid_*` methods. Use `get_var_grid` to get the grid id for a particular variable.
###Code
model.get_var_grid('surface__elevation')
###Output
_____no_output_____
###Markdown
With the grid id, we can now get information about that grid. For instance, the *x* and *y* coordinates of each grid node and how nodes and faces are connected to one another.
###Code
x, y = model.get_grid_x(0), model.get_grid_y(0)
tris = model.get_grid_face_node_connectivity(0).reshape((-1, 3))
###Output
_____no_output_____
###Markdown
We now use the function we defined at the top of this notebook to plot surface elevations on the child grid. We notice that child has initialized its elevations with some random noise.
###Code
plot_tris(x, y, tris, z)
###Output
_____no_output_____
###Markdown
We would like to run child with some real topography. To see if we able to set the elevations of child, we can look at the list of input variables. Fortunately, `surface__elevation` is an input variable, which means we can set it.
###Code
model.input_var_names
###Output
_____no_output_____
###Markdown
For this example, we'll set a shoreline at `y=10000.` and add some land and an ocean.
###Code
y_shore = 10000.
z[y > y_shore] += 100
z[y <= y_shore] -= 100
###Output
_____no_output_____
###Markdown
Use the `set_value` method to set the elevations, and then `get_value` to get them to make sure they were set correctly.
###Code
model.set_value('surface__elevation', z)
z = model.get_value('surface__elevation')
plot_tris(x, y, tris, z)
###Output
_____no_output_____
###Markdown
The elevations look good so lets run for 5000 years (this may take a minute or so - on my not-all-that-fast computer this takes about 30s).
###Code
model.update_until(5000.)
z = model.get_value('surface__elevation')
plot_tris(x, y, tris, z)
###Output
_____no_output_____
###Markdown
Let's do something a little fancier this time. Now we will advance the model for an additions 5000 years but, this time, every 100 years we'll uplift a block in the upper-right corner of the grid at a rate of .02 m/y.
###Code
dz_dt = .02
now = int(model.get_current_time())
for t in xrange(now, now + 5000, 100):
model.update_until(t)
z = model.get_value('surface__elevation')
z[(y > 15000) & (x > 10000)] += dz_dt * 100.
model.set_value('surface__elevation', z)
plot_tris(x, y, tris, z)
model.update_until(model.get_current_time() + 10000.)
z = model.get_value('surface__elevation')
plot_tris(x, y, tris, z)
###Output
_____no_output_____ |
Mathematics/1. fundamentals/22. filling jars.ipynb | ###Markdown
Animesh has N empty candy jars, numbered from 1 to N, with infinite capacity. He performs M operations. Each operation is described by 3 integers, a, b, and k. Here, a and b are indices of the jars, and k is the number of candies to be added inside each jar whose index lies between a and b (both inclusive). Can you tell the average number of candies after operations?
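A quick worked check of the formula with made-up numbers (5 jars, two operations), before reading the real input below:

```python
N = 5                                    # jars
operations = [(1, 2, 100), (2, 5, 50)]   # (a, b, k)
total = sum((b - a + 1) * k for a, b, k in operations)   # 2*100 + 4*50 = 400
print(total // N)                                        # floor of the average -> 80
```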
###Code
# first line: N = number of jars, M = number of operations
N, M = list(map(int, input().split()))
total = 0
for _ in range(M):
    a, b, k = list(map(int, input().split()))
    total += (b - a + 1) * k
print(total // N)
###Output
_____no_output_____ |
Home Assignment 3/Ex04_Lists.ipynb | ###Markdown
Exercises 04 - Lists 1. Second ElementComplete the function below according to its docstring.*HINT*: Python starts counting at 0.
###Code
def select_second(L):
"""Return the second element of the given list.
If the list has no second element, return None.
"""
return L[1] if len(L) >= 2 else None
select_second('Earth')
select_second('a')
###Output
_____no_output_____
###Markdown
2. Captain of the Worst TeamYou are analyzing sports teams. Members of each team are stored in a list. The **Coach** is the first name in the list, the **Captain** is the second name in the list, and other players are listed after that. These lists are stored in another list, which starts with the best team and proceeds through the list to the worst team last. Complete the function below to select the **captain** of the worst team.
###Code
def losing_team_captain(teams):
"""Given a list of teams, where each team is a list of names, return the 2nd player (captain)
from the last listed team
"""
return teams [-1][1]
losing_team_captain([['A','B','C','D'],
['Joey','Marcus','Tim','Jim']
])
###Output
_____no_output_____
###Markdown
3. Purple Shell itemThe next iteration of Mario Kart will feature an extra-infuriating new item, the ***Purple Shell***. When used, it warps the last place racer into first place and the first place racer into last place. Complete the function below to implement the Purple Shell's effect.
###Code
def purple_shell(racers):
"""Given a list of racers, set the first place racer (at the front of the list) to last
place and vice versa.
>>> r = ["Mario", "Bowser", "Luigi"]
>>> purple_shell(r)
>>> r
["Luigi", "Bowser", "Mario"]
"""
racers[0],racers[-1] = racers[-1],racers[0]
return racers
purple_shell(["Mario", "Bowser", "Luigi"])
###Output
_____no_output_____
###Markdown
4. Guess the Length!What are the lengths of the following lists? Fill in the variable `lengths` with your predictions. (Try to make a prediction for each list *without* just calling `len()` on it.)
###Code
a = [1, 2, 3]
b = [1, [2, 3]] #[2,3] count as 1 item
c = []
d = [1, 2, 3][1:]
# Put your predictions in the list below. Lengths should contain 4 numbers, the
# first being the length of a, the second being the length of b and so on.
lengths = [3,2,0,2]
###Output
_____no_output_____
###Markdown
5. Fashionably Late 🌶️We're using lists to record people who attended our party and what order they arrived in. For example, the following list represents a party with 7 guests, in which Adela showed up first and Ford was the last to arrive: party_attendees = ['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert', 'Ford']A guest is considered **'fashionably late'** if they arrived after at least half of the party's guests. However, they must not be the very last guest (that's taking it too far). In the above example, Mona and Gilbert are the only guests who were fashionably late.Complete the function below which takes a list of party attendees as well as a person, and tells us whether that person is fashionably late.
###Code
def fashionably_late(arrivals, name):
"""Given an ordered list of arrivals to the party and a name, return whether the guest with that
name was fashionably late.
"""
# the index equals the number of guests who arrived earlier; it must be at least
# half the guest count, and it must not be the very last index
total = len(arrivals)
order = arrivals.index(name)
return (order >= total / 2) and (order != total - 1)
fashionably_late(['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert', 'Ford'],'Mona')
fashionably_late(['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert', 'Ford'],'Gilbert')
fashionably_late(['Adela', 'Fleda', 'Owen', 'May', 'Mona', 'Gilbert', 'Ford'],'Ford')
###Output
_____no_output_____ |
LinkedIn_Web_Scrape.ipynb | ###Markdown
###Code
%pip install selenium
%pip install parsel
!apt update
!apt install chromium-chromedriver
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver',options=options)
# install packages
import selenium as se
import csv
from time import sleep
import xlrd
import pandas as pd
from selenium.webdriver.common.keys import Keys
from parsel import Selector
import re
# defining new variable passing two parameters
writer = csv.writer(open('Spring 2019 Grad Placement_6.csv', 'w+', encoding = 'utf-8-sig', newline = ''))
# writerow() method to the write to the file object
writer.writerow(['Name', 'Company', 'Title', 'Start_time', 'URL'])
# import web driver
from selenium import webdriver
# driver.get method() will navigate to a page given by the URL address
driver.get('https://www.linkedin.com')
# locate email form by_class_name
username = driver.find_element_by_name('session_key')
# send_keys() to simulate key strokes
username.send_keys('[email protected]')
sleep(0.5)
# locate password form by_class_name
password = driver.find_element_by_name('session_password')
# send_keys() to simulate key strokes
password.send_keys("niixS$$4zDHyGc'")
sleep(0.5)
# locate submit button by_class_name
log_in_button = driver.find_element_by_class_name("sign-in-form__submit-button")
# .click() to mimic button click
log_in_button.click()
sleep(0.5)
#import names data:
data = pd.read_excel('/content/drive/MyDrive/Web_Scraping_Workshop/Spring 2019 Grads Data.xlsx')
names = data["FULL_NAME"]
feed_list = []
# print('"site:linkedin.com/in/', 'AND', '"',i,'"', 'AND "Business Analytics"', 'AND "University of Texas at Dallas"",')
#search term
for i in names:
a = "site:linkedin.com/in/"
b = "AND"
c = i
d = "AND"
e = '"Business Analytics"'
s = " "
f = '"Dallas"'
result = a + s + c + s + e + s + f
feed_list.append(result)
# check if we made all search term for each names
print(len(feed_list))
print(feed_list[0])
feed_list
len(feed_list)
driver.get('https://www.google.com')
sleep(3)
search_query = driver.find_element_by_name('q')
search_query.send_keys(feed_list[2])
sleep(0.5)
search_query.send_keys(Keys.RETURN)
sleep(3)
linkedin_urls = driver.find_elements_by_tag_name('a')
linkedin_urls = [url.get_attribute('href') for url in linkedin_urls]
linkedin_urls[31]
sleep(0.5)
# use the first - best fit - search
driver.get(linkedin_urls[31])
# add a 5 second pause loading each URL
sleep(5)
# assigning the source code for the webpage to variable sel
sel = Selector(text=driver.page_source)
# xpath to extract the text from the class containing the name
name = sel.xpath('//*[starts-with(@class, "inline t-24 t-black t-normal break-words")]/text()').extract_first()
if name:
name = name.strip()
# xpath to extract the text from the class containing the company name
company = sel.xpath('//*[starts-with(@class, "pv-entity__secondary-title t-14 t-black t-normal")]/text()').extract_first()
if company:
company = company.strip()
# xpath to extract the text from the class containing the title
title = sel.xpath('//*[starts-with(@class, "t-16 t-black t-bold")]/text()').extract_first()
if title:
title = title.strip()
# xpath to extract the text from the class containing the time
time = sel.xpath('//*[starts-with(@class, "pv-entity__date-range t-14 t-black--light t-normal")]/span[2]').extract_first()
if time:
time = time.strip("<span>")
time = time.strip("</")
#url
linkedin_url = driver.current_url
writer.writerow([name,
company,
title,
time,
linkedin_url])
# URL to use in Selenium
driver.get('https://www.boxofficemojo.com/year/?ref_=bo_nb_di_secondarytab')
sel = Selector(text=driver.page_source)
# xpath to extract the text from the class containing the name
name = sel.xpath('//*[starts-with(@class, "a-link-normal")]/text()').getall()
if name:
name = name
name
print (name[9])
for x in feed_list[110:304]:
driver.get('https://www.google.com')
sleep(3)
search_query = driver.find_element_by_name('q')
search_query.send_keys(x)
sleep(0.5)
search_query.send_keys(Keys.RETURN)
sleep(3)
linkedin_urls = driver.find_elements_by_tag_name('a')
linkedin_urls = [url.get_attribute('href') for url in linkedin_urls]
linkedin_urls[31]
sleep(0.5)
# use the first - best fit - search
driver.get(linkedin_urls[31])
# add a 5 second pause loading each URL
sleep(5)
# assigning the source code for the webpage to variable sel
sel = Selector(text=driver.page_source)
# xpath to extract the text from the class containing the name
name = sel.xpath('//*[starts-with(@class, "inline t-24 t-black t-normal break-words")]/text()').extract_first()
if name:
name = name.strip()
# xpath to extract the text from the class containing the company name
company = sel.xpath('//*[starts-with(@class, "pv-entity__secondary-title t-14 t-black t-normal")]/text()').extract_first()
if company:
company = company.strip()
# xpath to extract the text from the class containing the title
title = sel.xpath('//*[starts-with(@class, "t-16 t-black t-bold")]/text()').extract_first()
if title:
title = title.strip()
# xpath to extract the text from the class containing the time
time = sel.xpath('//*[starts-with(@class, "pv-entity__date-range t-14 t-black--light t-normal")]/span[2]').extract_first()
if time:
time = time.strip("<span>")
time = time.strip("</")
#url
linkedin_url = driver.current_url
# use the first - best fit - search
driver.get(linkedin_urls[31])
# add a 5 second pause loading each URL
sleep(5)
# assigning the source code for the webpage to variable sel
sel = Selector(text=driver.page_source)
#output to csv
writer.writerow([name,
company,
title,
time,
linkedin_url])
print ("----------------------------------")
print (name)
print (company)
print (title)
print (time)
print (linkedin_url)
###Output
----------------------------------
None
None
None
None
https://www.google.com/search?q=site:linkedin.com/in/+Fu,+Junfeng+%22Business+Analytics%22+%22Dallas%22&tbm=isch&source=iu&ictx=1&fir=6arNEmMICLhOCM%252C0Y7LmlZCNil7fM%252C_&vet=1&usg=AI4_-kSY5EQLv0HV5qajKq0aL647XvEEYw&sa=X&ved=2ahUKEwjE7_zd8OLvAhUx8bsIHZjfC2cQ9QF6BAgJEAE#imgrc=6arNEmMICLhOCM
|
.ipynb_checkpoints/Climate_Change_Analysis_07Jan2020_updated-checkpoint.ipynb | ###Markdown
Climate Change and Deaths from Cancer Analysis In this analysis, we would like to see the correlation between climate change and deaths from Cancer, specifically at the top 5 countries with the highest population in the world.1. China2. India3. United States4. Indonesia5. Brazil
###Code
%matplotlib inline
# Dependencies and Set Up
import pandas as pd
import numpy as np
import requests
import json
import matplotlib.pyplot as plt
from scipy import stats
# Read csv for temperature by countries from 1991 to 2016
temp_china = pd.read_csv("./Resources/temperature_1991_2016_China.csv")
temp_india = pd.read_csv("./Resources/temperature_1991_2016_India.csv")
temp_usa = pd.read_csv("./Resources/temperature_1991_2016_USA.csv")
temp_indonesia = pd.read_csv("./Resources/temperature_1991_2016_Indonesia.csv")
temp_brazil = pd.read_csv("./Resources/temperature_1991_2016_Brazil.csv")
# Check and print the temperature data (China)
temp_china.head()
# Grouping the DataFrame by year
temp_china_by_year = temp_china.groupby(["Year"])
# Calculate the average temperature by year and print in DataFrame
temp_china_by_year_mean = pd.DataFrame(temp_china_by_year["Temperature - (Celsius)"].mean())
temp_china_by_year_mean.head()
# Perform a linear regression on the temperature year by year
year = temp_china_by_year_mean.index
temp = temp_china_by_year_mean["Temperature - (Celsius)"]
(slope, intercept, r_value, p_value, std_err) = stats.linregress(year, temp)
# Get regression values
regress_values = year * slope + intercept
# print(regress_values)
# Create plot for temperature in China from 1991 to 2016 with the line regression
plt.plot(temp_china_by_year_mean.index, temp_china_by_year_mean["Temperature - (Celsius)"],
color="green")
plt.plot(year, regress_values, color="red")
plt.title("Temperature (C) in China from 1991 to 2016")
plt.xlabel("Year")
plt.ylabel("Temperature (C)")
plt.show()
# Check and print the temperature data (India)
temp_india.head()
# Grouping the DataFrame by year
temp_india_by_year = temp_india.groupby(["Year"])
# Calculate the average temperature by year and print in DataFrame
temp_india_by_year_mean = pd.DataFrame(temp_india_by_year["Temperature - (Celsius)"].mean())
temp_india_by_year_mean.head()
# Perform a linear regression on the temperature year by year
year = temp_india_by_year_mean.index
temp = temp_india_by_year_mean["Temperature - (Celsius)"]
(slope, intercept, r_value, p_value, std_err) = stats.linregress(year, temp)
# Get regression values
regress_values = year * slope + intercept
# print(regress_values)
# Create plot for temperature in India from 1991 to 2016 with the line regression
plt.plot(temp_india_by_year_mean.index, temp_india_by_year_mean["Temperature - (Celsius)"],
color="orange")
plt.plot(year, regress_values, color="blue")
plt.title("Temperature (C) in India from 1991 to 2016")
plt.xlabel("Year")
plt.ylabel("Temperature (C)")
plt.show()
# Check and print the temperature data (USA)
temp_usa.head()
# Grouping the DataFrame by year
temp_usa_by_year = temp_usa.groupby(["Year"])
# Calculate the average temperature by year and print in DataFrame
temp_usa_by_year_mean = pd.DataFrame(temp_usa_by_year["Temperature - (Celsius)"].mean())
temp_usa_by_year_mean.head()
# Perform a linear regression on the temperature year by year
year = temp_usa_by_year_mean.index
temp = temp_usa_by_year_mean["Temperature - (Celsius)"]
(slope, intercept, r_value, p_value, std_err) = stats.linregress(year, temp)
# Get regression values
regress_values = year * slope + intercept
# print(regress_values)
# Create plot for temperature in United States from 1991 to 2016 with the line regression
plt.plot(temp_usa_by_year_mean.index, temp_usa_by_year_mean["Temperature - (Celsius)"],
color="orange")
plt.plot(year, regress_values, color="blue")
plt.title("Temperature (C) in United States from 1991 to 2016")
plt.xlabel("Year")
plt.ylabel("Temperature (C)")
plt.show()
# Check and print the temperature data (Indonesia)
temp_indonesia.head()
# Grouping the DataFrame by year
temp_indonesia_by_year = temp_indonesia.groupby(["Year"])
# Calculate the average temperature by year and print in DataFrame
temp_indonesia_by_year_mean = pd.DataFrame(temp_indonesia_by_year["Temperature - (Celsius)"].mean())
temp_indonesia_by_year_mean.head()
# Perform a linear regression on the temperature year by year
year = temp_indonesia_by_year_mean.index
temp = temp_indonesia_by_year_mean["Temperature - (Celsius)"]
(slope, intercept, r_value, p_value, std_err) = stats.linregress(year, temp)
# Get regression values
regress_values = year * slope + intercept
# print(regress_values)
# Create plot for temperature in Indonesia from 1991 to 2016 with the line regression
plt.plot(temp_indonesia_by_year_mean.index, temp_indonesia_by_year_mean["Temperature - (Celsius)"],
color="orange")
plt.plot(year, regress_values, color="blue")
plt.title("Temperature (C) in Indonesia from 1991 to 2016")
plt.xlabel("Year")
plt.ylabel("Temperature (C)")
plt.show()
# Check and print the temperature data (Brazil)
temp_brazil.head()
# Grouping the DataFrame by year
temp_brazil_by_year = temp_brazil.groupby(["Year"])
# Calculate the average temperature by year and print in DataFrame
temp_brazil_by_year_mean = pd.DataFrame(temp_brazil_by_year["Temperature - (Celsius)"].mean())
temp_brazil_by_year_mean.head()
# Perform a linear regression on the temperature year by year
year = temp_brazil_by_year_mean.index
temp = temp_brazil_by_year_mean["Temperature - (Celsius)"]
(slope, intercept, r_value, p_value, std_err) = stats.linregress(year, temp)
# Get regression values
regress_values = year * slope + intercept
# print(regress_values)
# Create plot for temperature in Brazil from 1991 to 2016 with the line regression
plt.plot(temp_brazil_by_year_mean.index, temp_brazil_by_year_mean["Temperature - (Celsius)"],
color="orange")
plt.plot(year, regress_values, color="blue")
plt.title("Temperature (C) in Brazil from 1991 to 2016")
plt.xlabel("Year")
plt.ylabel("Temperature (C)")
plt.show()
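# NOTE (added for illustration): the five temperature blocks above repeat the same
# group-by-year / linregress / plot steps. They could be collapsed into a helper such as
# the sketch below; `plot_country_temp` is a name we made up and is not used elsewhere.
def plot_country_temp(temp_df, country_name, line_color="orange", fit_color="blue"):
    # average temperature per year
    yearly_mean = temp_df.groupby("Year")["Temperature - (Celsius)"].mean()
    year = yearly_mean.index
    temp = yearly_mean.values
    # linear trend across the years
    (slope, intercept, r_value, p_value, std_err) = stats.linregress(year, temp)
    plt.plot(year, temp, color=line_color)
    plt.plot(year, year * slope + intercept, color=fit_color)
    plt.title("Temperature (C) in {} from 1991 to 2016".format(country_name))
    plt.xlabel("Year")
    plt.ylabel("Temperature (C)")
    plt.show()
# Example usage: plot_country_temp(temp_brazil, "Brazil")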
# Read the csv for the annual CO2 emission by country
CO2_emission = pd.read_csv("./Resources/annual_co2_emissions_by_region.csv")
CO2_emission.head()
# Rename the column name
CO2_emission = CO2_emission.rename(
columns = {"Entity": "Country", "Annual CO2 emissions (tonnes )": "CO2 emissions (tonnes)"})
CO2_emission.head()
# Extract only China data
columns = ["Country", "Year", "CO2 emissions (tonnes)"]
CO2_emission_china = CO2_emission.loc[(CO2_emission["Country"] == "China"), columns]
CO2_emission_china.head()
# Extract China data for 1991 to 2016 only
CO2_emission_china = CO2_emission_china.set_index("Year")
years = np.arange(1991, 2017, 1)
years_91_16 = []
for year in years:
years_91_16.append(year)
# years_91_16
CO2_emission_china = CO2_emission_china.loc[years_91_16]
CO2_emission_china.head(10)
# Extract only India data
columns = ["Country", "Year", "CO2 emissions (tonnes)"]
CO2_emission_india = CO2_emission.loc[(CO2_emission["Country"] == "India"), columns]
CO2_emission_india.head()
# Extract India data for 1991 to 2016 only
CO2_emission_india = CO2_emission_india.set_index("Year")
CO2_emission_india = CO2_emission_india.loc[years_91_16]
CO2_emission_india.head(10)
# Extract only United States data
columns = ["Country", "Year", "CO2 emissions (tonnes)"]
CO2_emission_usa = CO2_emission.loc[(CO2_emission["Country"] == "United States"), columns]
CO2_emission_usa.head()
# Extract United States data for 1991 to 2016 only
CO2_emission_usa = CO2_emission_usa.set_index("Year")
CO2_emission_usa = CO2_emission_usa.loc[years_91_16]
CO2_emission_usa
# Extract only Indonesia data
columns = ["Country", "Year", "CO2 emissions (tonnes)"]
CO2_emission_indonesia = CO2_emission.loc[(CO2_emission["Country"] == "Indonesia"), columns]
CO2_emission_indonesia.head()
# Extract Indonesia data for 1991 to 2016 only
CO2_emission_indonesia = CO2_emission_indonesia.set_index("Year")
CO2_emission_indonesia = CO2_emission_indonesia.loc[years_91_16]
CO2_emission_indonesia.head(15)
# Extract only Brazil data
columns = ["Country", "Year", "CO2 emissions (tonnes)"]
CO2_emission_brazil = CO2_emission.loc[(CO2_emission["Country"] == "Brazil"), columns]
CO2_emission_brazil.head()
# Extract Brazil data for 1991 to 2016 only
CO2_emission_brazil = CO2_emission_brazil.set_index("Year")
CO2_emission_brazil = CO2_emission_brazil.loc[years_91_16]
CO2_emission_brazil.head(15)
# Read the csv for total cancer deaths by cancer types
cancer_deaths = pd.read_csv("./Resources/total_cancer_deaths_by_type.csv")
cancer_deaths.head()
# Seeing the list of column names
list(cancer_deaths.columns)
# Extracting the columns for Country/Entity, Year, and deaths because of lung cancer
lung_cancer_deaths = cancer_deaths.loc[:, ["Entity", "Year", "Tracheal, bronchus, and lung cancer (deaths)"]]
lung_cancer_deaths.head()
# Rename the column name
lung_cancer_deaths = lung_cancer_deaths.rename(columns = {"Entity": "Country"})
lung_cancer_deaths.head()
# Extract the deaths caused by lung cancer for China only
lung_cancer_deaths_china = lung_cancer_deaths.loc[lung_cancer_deaths["Country"] == "China"]
# Set index as year and extract the deaths caused by lung cancer in China for year 1991 to 2016 only
lung_cancer_deaths_china = lung_cancer_deaths_china.set_index("Year")
lung_cancer_deaths_china = lung_cancer_deaths_china.loc[years_91_16]
lung_cancer_deaths_china.head(15)
# Extract the deaths caused by lung cancer for India only
lung_cancer_deaths_india = lung_cancer_deaths.loc[lung_cancer_deaths["Country"] == "India"]
# Set index as year and extract the deaths caused by lung cancer in India for year 1991 to 2016 only
lung_cancer_deaths_india = lung_cancer_deaths_india.set_index("Year")
lung_cancer_deaths_india = lung_cancer_deaths_india.loc[years_91_16]
lung_cancer_deaths_india.head(15)
# Extract the deaths caused by lung cancer for United States only
lung_cancer_deaths_usa = lung_cancer_deaths.loc[lung_cancer_deaths["Country"] == "United States"]
# Set index as year and extract the deaths caused by lung cancer in United States for year 1991 to 2016 only
lung_cancer_deaths_usa = lung_cancer_deaths_usa.set_index("Year")
lung_cancer_deaths_usa = lung_cancer_deaths_usa.loc[years_91_16]
lung_cancer_deaths_usa.head(15)
# Extract the deaths caused by lung cancer for Indonesia only
lung_cancer_deaths_indonesia = lung_cancer_deaths.loc[lung_cancer_deaths["Country"] == "Indonesia"]
# Set index as year and extract the deaths caused by lung cancer in Indonesia for year 1991 to 2016 only
lung_cancer_deaths_indonesia = lung_cancer_deaths_indonesia.set_index("Year")
lung_cancer_deaths_indonesia = lung_cancer_deaths_indonesia.loc[years_91_16]
lung_cancer_deaths_indonesia.head(15)
# Extract the deaths caused by lung cancer for Brazil only
lung_cancer_deaths_brazil = lung_cancer_deaths.loc[lung_cancer_deaths["Country"] == "Brazil"]
# Set index as year and extract the deaths caused by lung cancer in Brazil for year 1991 to 2016 only
lung_cancer_deaths_brazil = lung_cancer_deaths_brazil.set_index("Year")
lung_cancer_deaths_brazil = lung_cancer_deaths_brazil.loc[years_91_16]
lung_cancer_deaths_brazil.head(15)
# Read the csv for total population by region
pop = pd.read_csv("./Resources/total_population_by_region.csv")
pop.head()
# Extract the population data (all countries) for the years 1991 to 2016 only
pop_91_16 = pop.loc[:,["Country Name",
"1991", "1992", "1993", "1994", "1995",
"1996", "1997", "1998", "1999", "2000",
"2001", "2002", "2003", "2004", "2005",
"2006", "2007", "2008", "2009", "2010",
"2011", "2012", "2013", "2014", "2015", "2016"]]
# Set index as Country
pop_91_16 = pop_91_16.set_index("Country Name")
# Transpose the columns and rows
pop_91_16 = pd.DataFrame.transpose(pop_91_16)
pop_91_16.head()
pop_91_16 = pop_91_16.rename_axis("Year", axis=1)
pop_91_16.head()
# Extract the population data for China only and rename the column name to "Population"
pop_china = pop_91_16.loc[:, ["China"]]
pop_china = pop_china.rename(columns = {"China": "Population"})
pop_china.index = pop_china.index.astype("int64")
pop_china.head()
# Extract the population data for India only and rename the column name to "Population"
pop_india = pop_91_16.loc[:, ["India"]]
pop_india = pop_india.rename(columns = {"India": "Population"})
pop_india.index = pop_india.index.astype("int64")
pop_india.head()
# Extract the population data for United States only and rename the column name to "Population"
pop_usa = pop_91_16.loc[:, ["United States"]]
pop_usa = pop_usa.rename(columns = {"United States": "Population"})
pop_usa.index = pop_usa.index.astype("int64")
pop_usa.head()
# Extract the population data for Indonesia only and rename the column name to "Population"
pop_indonesia = pop_91_16.loc[:, ["Indonesia"]]
pop_indonesia = pop_indonesia.rename(columns = {"Indonesia": "Population"})
pop_indonesia.index = pop_indonesia.index.astype("int64")
pop_indonesia.head()
# Extract the population data for Brazil only and rename the column name to "Population"
pop_brazil = pop_91_16.loc[:, ["Brazil"]]
pop_brazil = pop_brazil.rename(columns = {"Brazil": "Population"})
pop_brazil.index = pop_brazil.index.astype("int64")
pop_brazil.head()
lung_cancer_deaths_china.head()
lung_cancer_deaths_china = lung_cancer_deaths_china.rename_axis(index=None, columns="Year")
lung_cancer_deaths_china.head()
# Merge population data with the total lung cancer deaths to get the percentage of people
# who died of lung cancer in each country
lung_cancer_deaths_total_pop_china = pop_china.merge(lung_cancer_deaths_china, how="outer",
left_index=True, right_index=True)
# Create an extra column to store lung cancer percentage
pct_lung_cancer_deaths = \
(lung_cancer_deaths_total_pop_china["Tracheal, bronchus, and lung cancer (deaths)"] / \
lung_cancer_deaths_total_pop_china["Population"]) * 100
lung_cancer_deaths_total_pop_china["Tracheal, bronchus, and lung cancer (%)"] = pct_lung_cancer_deaths.map("{:.5f}%".format)
lung_cancer_deaths_total_pop_china
# # Plot the graph based on CO2 emission data for China
# plt.plot(CO2_emission_china.index, CO2_emission_china["CO2 emissions (tonnes)"],
# color="red", marker="o", markersize=5, linewidth=0.5)
# plt.show()
# # Plot the graph based on lung cancer deaths data for China
# plt.plot(lung_cancer_deaths_total_pop_china.index, lung_cancer_deaths_total_pop_china["Tracheal, bronchus, and lung cancer (%)"],
# color="blue", marker="o", markersize=5, linewidth=0.5)
# plt.show()
# Plot both CO2 emission and lung cancer deaths data for China in one graph
years = np.arange(1991, 2017, 1)
years_label = []
for year in years:
years_label.append(year)
fig, ax1 = plt.subplots(figsize=(12,12))
ax1.plot(years, CO2_emission_china["CO2 emissions (tonnes)"],
color="red", linewidth=1)
ax1.set_xlabel("Year")
ax1.set_ylabel("CO2 Emissions in China (Tonnes)", color="red")
ax1.set_xticks(years_label)
ax1.set_xticklabels(years_label, rotation=45)
ax2 = ax1.twinx()
ax2.plot(years, lung_cancer_deaths_total_pop_china["Tracheal, bronchus, and lung cancer (%)"],
color="blue", linewidth=1)
ax2.set_ylabel("Tracheal, Bronchus, and Lung Cancer Deaths (%)", color="blue")
# fig.tight_layout()
plt.title("CO2 Emissions and Deaths Caused by Tracheal, Bronchus, Lung Cancer in China from 1991 to 2016")
plt.show()
# Plot both CO2 emission and lung cancer deaths data for India in one graph
years = np.arange(1991, 2017, 1)
years_label = []
for year in years:
years_label.append(year)
fig, ax1 = plt.subplots(figsize=(10,10))
ax1.plot(years, CO2_emission_india["CO2 emissions (tonnes)"],
color="red", linewidth=1)
ax1.set_xlabel("Year")
ax1.set_ylabel("CO2 Emissions in India (Tonnes)", color="red")
ax1.set_xticks(years_label)
ax1.set_xticklabels(years_label, rotation=45)
ax2 = ax1.twinx()
ax2.plot(years, lung_cancer_deaths_india["Tracheal, bronchus, and lung cancer (deaths)"],
color="blue", linewidth=1)
ax2.set_ylabel("Tracheal, bronchus, and lung cancer (deaths)", color="blue")
# fig.tight_layout()
plt.title("CO2 Emissions and Deaths Caused by Tracheal, Bronchus, Lung Cancer in India from 1991 to 2016")
plt.show()
# Plot both CO2 emission and lung cancer deaths data for United States in one graph
years = np.arange(1991, 2017, 1)
years_label = []
for year in years:
years_label.append(year)
fig, ax1 = plt.subplots(figsize=(10,10))
ax1.plot(years, CO2_emission_usa["CO2 emissions (tonnes)"],
color="red", linewidth=1)
ax1.set_xlabel("Year")
ax1.set_ylabel("CO2 Emissions in United States (Tonnes)", color="red")
ax1.set_xticks(years_label)
ax1.set_xticklabels(years_label, rotation=45)
ax2 = ax1.twinx()
ax2.plot(years, lung_cancer_deaths_usa["Tracheal, bronchus, and lung cancer (deaths)"],
color="blue", linewidth=1)
ax2.set_ylabel("Tracheal, bronchus, and lung cancer (deaths)", color="blue")
# fig.tight_layout()
plt.title("CO2 Emissions and Deaths Caused by Tracheal, Bronchus, Lung Cancer in United States from 1991 to 2016")
plt.show()
# Plot both CO2 emission and lung cancer deaths data for Indonesia in one graph
years = np.arange(1991, 2017, 1)
years_label = []
for year in years:
years_label.append(year)
fig, ax1 = plt.subplots(figsize=(10,10))
ax1.plot(years, CO2_emission_indonesia["CO2 emissions (tonnes)"],
color="red", linewidth=1)
ax1.set_xlabel("Year")
ax1.set_ylabel("CO2 Emissions in Indonesia (Tonnes)", color="red")
ax1.set_xticks(years_label)
ax1.set_xticklabels(years_label, rotation=45)
ax2 = ax1.twinx()
ax2.plot(years, lung_cancer_deaths_indonesia["Tracheal, bronchus, and lung cancer (deaths)"],
color="blue", linewidth=1)
ax2.set_ylabel("Tracheal, bronchus, and lung cancer (deaths)", color="blue")
# fig.tight_layout()
plt.title("CO2 Emissions and Deaths Caused by Tracheal, Bronchus, Lung Cancer in Indonesia from 1991 to 2016")
plt.show()
# Plot both CO2 emission and lung cancer deaths data for Brazil in one graph
years = np.arange(1991, 2017, 1)
years_label = []
for year in years:
years_label.append(year)
fig, ax1 = plt.subplots(figsize=(10,10))
ax1.plot(years, CO2_emission_brazil["CO2 emissions (tonnes)"],
color="red", linewidth=1)
ax1.set_xlabel("Year")
ax1.set_ylabel("CO2 Emissions in Brazil (Tonnes)", color="red")
ax1.set_xticks(years_label)
ax1.set_xticklabels(years_label, rotation=45)
ax2 = ax1.twinx()
ax2.plot(years, lung_cancer_deaths_brazil["Tracheal, bronchus, and lung cancer (deaths)"],
color="blue", linewidth=1)
ax2.set_ylabel("Tracheal, bronchus, and lung cancer (deaths)", color="blue")
# fig.tight_layout()
plt.title("CO2 Emissions and Deaths Caused by Tracheal, Bronchus, Lung Cancer in Brazil from 1991 to 2016")
plt.show()
###Output
_____no_output_____ |
customer-satisfaction-using-xgboost.ipynb | ###Markdown
> Data Preprocessing
###Code
train_df = pd.read_csv("../input/santander-customer-satisfaction/train.csv",encoding='latin-1')
print('dataset shape:', train_df.shape)
train_df.head(3)
test_df = pd.read_csv("../input/santander-customer-satisfaction/test.csv",encoding='latin-1')
print('dataset shape:', test_df.shape)
test_df.head(3)
train_df.info()
print(train_df['TARGET'].value_counts())
unsatisfied_cnt = train_df[train_df['TARGET'] == 1]['TARGET'].count()
total_cnt = train_df['TARGET'].count()
print('unsatisfied Ratio {0:.2f}'.format((unsatisfied_cnt / total_cnt)))
train_df.describe( )
print(train_df['var3'].value_counts( )[:10])
print(test_df['var3'].value_counts( )[:10])
###Output
2 73962
-999999 120
8 116
9 108
13 107
3 107
1 99
10 85
11 85
12 83
Name: var3, dtype: int64
###Markdown
> -999999 is a NaN placeholder -> we should replace or drop it
###Code
# Replace the -999999 values in var3 with 2 (its most frequent value) and drop the ID feature
train_df['var3'].replace(-999999, 2, inplace=True)
train_df.drop('ID',axis=1 , inplace=True)
test_df['var3'].replace(-999999, 2, inplace=True)
test_df.drop('ID',axis=1 , inplace=True)
# Split features and label.
X_features = train_df.iloc[:, :-1]
y_labels = train_df.iloc[:, -1]
print('Feature data shape:{0}'.format(X_features.shape))
X_test = test_df
from sklearn.model_selection import train_test_split
# Split train, validation set
X_train, X_val, y_train, y_val = train_test_split(X_features, y_labels,
test_size=0.2, random_state=0)
train_cnt = y_train.count()
val_cnt = y_val.count()
print('train set Shape:{0}, val set Shape:{1}'.format(X_train.shape , X_val.shape))
print(' ratio of train set label')
print(y_train.value_counts()/train_cnt)
print('\n ratio of validation set label')
print(y_val.value_counts()/val_cnt)
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score
# n_estimators: 500
xgb_clf = XGBClassifier(n_estimators=500, random_state=156)
# evaluation metric: auc, early_stopping_rounds: 100.
xgb_clf.fit(X_train, y_train, early_stopping_rounds=100,
eval_metric="auc", eval_set=[(X_train, y_train), (X_val, y_val)])
xgb_roc_score = roc_auc_score(y_val, xgb_clf.predict_proba(X_val)[:,1],average='macro')
print('ROC AUC: {0:.4f}'.format(xgb_roc_score))
# n_estimators: 1000, learning_rate=0.02, reg_alpha=0.03.
xgb_clf = XGBClassifier(n_estimators=1000, random_state=156, learning_rate=0.02, max_depth=7,\
min_child_weight=1, colsample_bytree=0.75, reg_alpha=0.03)
# evaluation metric: auc, early stopping: 200
xgb_clf.fit(X_train, y_train, early_stopping_rounds=200,
eval_metric="auc",eval_set=[(X_train, y_train), (X_val, y_val)])
xgb_roc_score = roc_auc_score(y_val, xgb_clf.predict_proba(X_val)[:,1],average='macro')
print('ROC AUC: {0:.4f}'.format(xgb_roc_score))
from xgboost import plot_importance
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1,1,figsize=(10,8))
plot_importance(xgb_clf, ax=ax , max_num_features=20,height=0.4)
submission = pd.read_csv('../input/santander-customer-satisfaction/sample_submission.csv')
submission.head()
#finals_pred = xgb_clf.predict(X_test)
#finals_pred
target = xgb_clf.predict(X_test)
submission['TARGET'] = target
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
calculations/model-sediment-resuspension-rate.ipynb | ###Markdown
What is the range of sediment resuspension rates in our model?
###Code
import numpy as np
import netCDF4 as nc
###Output
_____no_output_____
###Markdown
Parameters
###Code
imin, imax = 1479, 2179
jmin, jmax = 159, 799
isize, jsize = imax-imin, jmax-jmin
###Output
_____no_output_____
###Markdown
Load files
###Code
mask = nc.Dataset('/ocean/brogalla/GEOTRACES/data/ANHA12/ANHA12_mesh1.nc')
tmask = np.array(mask.variables['tmask'])
e1t_base = np.array(mask.variables['e1t'])[0,imin:imax,jmin:jmax]
e2t_base = np.array(mask.variables['e2t'])[0,imin:imax,jmin:jmax]
e3t = np.array(mask.variables['e3t_0'])[0,:,imin:imax,jmin:jmax]
ds = nc.Dataset('/ocean/brogalla/GEOTRACES/data/erosion_rate-20211004.nc')
erosion_rate = np.array(ds.variables['er_rate'])[imin:imax,jmin:jmax]
###Output
_____no_output_____
###Markdown
Calculations
###Code
erosion_rate_m = np.ma.masked_where((tmask[0,0,imin:imax,jmin:jmax] < 0.1), 0.75*erosion_rate)
# 0.75 comes from resus_cst parameter in namelist_mn.constants
print('Everywhere in domain ------')
print(f'Maximum sediment resuspension rate: {np.ma.amax(erosion_rate_m)*1e3*3600*24*365:.2f} g/m2/year')
print(f'Minimum sediment resuspension rate: {np.ma.amin(erosion_rate_m)*1e3*3600*24*365:.2f} g/m2/year')
print(f'Average sediment resuspension rate: {np.ma.mean(erosion_rate_m)*1e3*3600*24*365:.2f} g/m2/year')
###Output
Everywhere in domain ------
Maximum sediment resuspension rate: 2808.02 g/m2/year
Minimum sediment resuspension rate: 0.00 g/m2/year
Average sediment resuspension rate: 94.97 g/m2/year
|
chapters/machine_learning/notebooks/decision_trees.ipynb | ###Markdown
Decision Trees
###Code
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
###Output
_____no_output_____
###Markdown
A decision tree is the easiest classifier to understand, interpret, and explain. A decision tree is constructed by recursively splitting the data set into a sequence of subsets based on if-then questions. The training set consists of pairs $(\mathbf{x},y)$ where $\mathbf{x}\in \mathbb{R}^d$, where $d$ is the number of features available and where $y$ is the corresponding label. The learning method splits the training set into groups based on $\mathbf{x}$ while attempting to keep the assignments in each group as uniform as possible. In order to do this, the learning method must pick a feature and an associated threshold for that feature upon which to divide the data. This is tricky to explain in words, but easy to see with an example. First, let's set up the Scikit-learn classifier,
###Code
%matplotlib inline
from matplotlib.pylab import subplots
from numpy import ma
import numpy as np
np.random.seed(12345678)
from sklearn import tree
clf = tree.DecisionTreeClassifier()
###Output
_____no_output_____
###Markdown
Let's also create some example data,
###Code
import numpy as np
M=np.fromfunction(lambda i,j:j>=2,(4,4)).astype(int)
print(M)
###Output
[[0 0 1 1]
[0 0 1 1]
[0 0 1 1]
[0 0 1 1]]
###Markdown
**Programming Tip.** The `fromfunction` creates Numpy arrays using the indices as inputs to a function whose value is the corresponding array entry. We want to classify the elements of the matrix based on their respective positions in the matrix. By just looking at the matrix, the classification is pretty simple --- classify as `0` for any positions in the first two columns of the matrix, and classify `1` otherwise. Let's walk through this formally and see if this solution emerges from the decision tree. The values of the array are the labels for the training set and the indices of those values are the elements of $\mathbf{x}$. Specifically, the training set has $\mathcal{X} = \left\{(i,j)\right\}$ and $\mathcal{Y}=\left\{0,1\right\}$. Now, let's extract those elements and construct the training set.
###Code
i,j = np.where(M==0)
x=np.vstack([i,j]).T # build nsamp by nfeatures
y = j.reshape(-1,1)*0 # 0 elements
print(x)
print(y)
###Output
[[0 0]
[0 1]
[1 0]
[1 1]
[2 0]
[2 1]
[3 0]
[3 1]]
[[0]
[0]
[0]
[0]
[0]
[0]
[0]
[0]]
###Markdown
Thus, the elements of `x` are the two-dimensional indices of the values of `y`. For example, `M[x[0,0],x[0,1]]=y[0,0]`. Likewise, to complete the training set, we just need to stack the rest of the data to cover all the cases,
###Code
i,j = np.where(M==1)
x=np.vstack([np.vstack([i,j]).T,x ]) # build nsamp x nfeatures
y=np.vstack([j.reshape(-1,1)*0+1,y]) # 1 elements
###Output
_____no_output_____
###Markdown
With all that established, all we have to do is train the classifier,
###Code
clf.fit(x,y)
###Output
_____no_output_____
###Markdown
To evaluate how the classifier performed, we can report the score,
###Code
clf.score(x,y)
###Output
_____no_output_____
###Markdown
For this classifier, the *score* is the accuracy, which is defined as the ratio of the sum of the true-positives ($TP$) and true-negatives ($TN$) divided by the sum of all the terms, including the false terms,

$$\texttt{accuracy}=\frac{TP+TN}{TP+TN+FN+FP}$$

In this case, the classifier gets every point correctly, so $FN=FP=0$. On a related note, two other common names from information retrieval theory are *recall* (a.k.a. sensitivity) and *precision* (a.k.a. positive predictive value, $TP/(TP+FP)$). We can visualize this tree in [Figure](fig:example_tree_001). The Gini coefficients (a.k.a. categorical variance) in the figure are a measure of the purity of each so-determined class. This coefficient is defined as,

Example decision tree. The Gini coefficient in each branch measures the purity of the partition in each node. The samples item in the box shows the number of items in the corresponding node in the decision tree.

$$\texttt{Gini}_m = \sum_k p_{m,k}(1-p_{m,k})$$

where

$$p_{m,k} = \frac{1}{N_m} \sum_{x_i\in R_m} I(y_i=k)$$

which is the proportion of observations labeled $k$ in the $m^{th}$ node and $I(\cdot)$ is the usual indicator function. Note that the maximum value of the Gini coefficient is $\max{\texttt{Gini}_{m}}=1-1/m$. For our simple example, half of the sixteen samples are in category `0` and the other half are in the `1` category. Using the notation above, the top box corresponds to the $0^{th}$ node, so $p_{0,0} =1/2 = p_{0,1}$. Then, $\texttt{Gini}_0=0.5$. The next layer of nodes in [Figure](fig:example_tree_001) is determined by whether or not the second dimension of the $\mathbf{x}$ data is greater than `1.5`. The Gini coefficients for each of these child nodes are zero because after the prior split, each subsequent category is pure. The `value` list in each of the nodes shows the distribution of elements in each category at each node. To make this example more interesting, we can contaminate the data slightly,
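Before doing that, the Gini arithmetic quoted above can be checked with a few lines of Python (a small sketch added here for illustration; the helper name `gini` is our own and not part of Scikit-learn):

```
def gini(counts):
    # counts holds the number of samples of each class in a node
    total = float(sum(counts))
    return sum((c / total) * (1 - c / total) for c in counts)

print(gini([8, 8]))    # 0.5, matching Gini_0 for the root node above
print(gini([16, 0]))   # 0.0 for a perfectly pure node
print(gini([1, 7]))    # 0.21875, a value that reappears below
```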
###Code
M[1,0]=1 # put in different class
print(M) # now contaminated
###Output
[[0 0 1 1]
[1 0 1 1]
[0 0 1 1]
[0 0 1 1]]
###Markdown
Now we have a `1` entry in the previously pure first column's second row.
###Code
i,j = np.where(M==0)
x=np.vstack([i,j]).T
y = j.reshape(-1,1)*0
i,j = np.where(M==1)
x=np.vstack([np.vstack([i,j]).T,x])
y = np.vstack([j.reshape(-1,1)*0+1,y])
clf.fit(x,y)
###Output
_____no_output_____
###Markdown
The result is shown in [Figure](fig:example_tree_002). Note the tree has grown significantly due to this one change! The $0^{th}$ node has the following parameters, $p_{0,0} =7/16$ and $p_{0,1}=9/16$. This makes the Gini coefficient for the $0^{th}$ node equal to $\frac{7}{16}\left(1-\frac{7}{16}\right)+\frac{9}{16}(1-\frac{9}{16})= 0.492$. As before, the root node splits on $X[1] \leq 1.5$. Let's see if we can reconstruct the succeeding layer of nodes manually, as in the following,
###Code
y[x[:,1]>1.5] # first node on the right
###Output
_____no_output_____
###Markdown
This obviously has a zero Gini coefficient. Likewise, the node on the left contains the following,
###Code
y[x[:,1]<=1.5] # first node on the left
###Output
_____no_output_____
###Markdown
The Gini coefficient in this case is computed as `(1/8)*(1-1/8)+(7/8)*(1-7/8)=0.21875`. This node splits based on `X[1]<0.5`. The child node to the right derives from the following equivalent logic,
###Code
np.logical_and(x[:,1]<=1.5,x[:,1]>0.5)
###Output
_____no_output_____
###Markdown
with corresponding classes,
###Code
y[np.logical_and(x[:,1]<=1.5,x[:,1]>0.5)]
###Output
_____no_output_____
###Markdown
**Programming Tip.** The `logical_and` in Numpy provides element-wise logical conjunction. It is not possible to accomplish this with something like `0.5< x[:,1] <=1.5` because of the way Python parses this syntax.

Decision tree for contaminated data. Note that just one change in the training data caused the tree to grow five times as large as before!

Notice that for this example as well as for the previous one, the decision tree was able to exactly memorize (overfit) the data with perfect accuracy. From our discussion of machine learning theory, this is an indication of potential problems in generalization. The key step in building the decision tree is to come up with the initial split. There are a number of algorithms that can build decision trees based on different criteria, but the general idea is to control the information *entropy* as the tree is developed. In practical terms, this means that the algorithms attempt to build trees that are not excessively deep. It is well-established that this is a very hard problem to solve completely and there are many approaches to it. This is because the algorithms must make global decisions at each node of the tree using the local data available up to that point.
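To make the parsing issue concrete, here is a short demonstration (our addition, not from the original text); the chained form raises an error on arrays, while the `logical_and` form used above works:

```
try:
    0.5 < x[:, 1] <= 1.5   # chained comparison needs the truth value of a whole array
except ValueError as e:
    print('chained comparison fails: ' + str(e))

# element-wise conjunction, as in the code above
print(y[np.logical_and(x[:, 1] <= 1.5, x[:, 1] > 0.5)].ravel())
```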
###Code
_=clf.fit(x,y)
fig,axs=subplots(2,2,sharex=True,sharey=True)
ax=axs[0,0]
ax.set_aspect(1)
_=ax.axis((-1,4,-1,4))
ax.invert_yaxis()
# same background all on axes
for ax in axs.flat:
_=ax.plot(ma.masked_array(x[:,1],y==1),ma.masked_array(x[:,0],y==1),'ow',mec='k')
_=ax.plot(ma.masked_array(x[:,1],y==0),ma.masked_array(x[:,0],y==0),'o',color='gray')
lines={'h':[],'v':[]}
nc=0
for i,j,ax in zip(clf.tree_.feature,clf.tree_.threshold,axs.flat):
_=ax.set_title('node %d'%(nc))
nc+=1
if i==0: _=lines['h'].append(j)
elif i==1: _=lines['v'].append(j)
for l in lines['v']: _=ax.vlines(l,-1,4,lw=3)
for l in lines['h']: _=ax.hlines(l,-1,4,lw=3)
###Output
_____no_output_____
###Markdown
The decision tree divides the training set into regions by splitting successively along each dimension until each region is as pure as possible.

For this example, the decision tree partitions the $\mathcal{X}$ space into different regions corresponding to different $\mathcal{Y}$ labels as shown in [Figure](fig:example_tree_003). The root node at the top of [Figure](fig:example_tree_002) splits the input data based on $X[1] \leq 1.5$. This corresponds to the top left panel in [Figure](fig:example_tree_003) (i.e., `node 0`) where the vertical line divides the training data shown into two regions, corresponding to the two subsequent child nodes. The next split happens with $X[1] \leq 0.5$ as shown in the next panel of [Figure](fig:example_tree_003) titled `node 1`. This continues until the last panel on the lower right, where the contaminated element we injected has been isolated into its own sub-region. Thus, the last panel is a representation of [Figure](fig:example_tree_002), where the horizontal/vertical lines correspond to successive splits in the decision tree.
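To connect the fitted tree to the partition lines in these panels, the split structure can also be printed directly from the `tree_` attribute (a small sketch we add here; it assumes `clf` is still the tree fitted on the contaminated data above):

```
def print_tree(clf, node=0, depth=0):
    # children_left/children_right, feature, and threshold are parallel arrays in clf.tree_
    t = clf.tree_
    indent = '  ' * depth
    if t.children_left[node] == -1:   # -1 marks a leaf
        print(indent + 'leaf: value = ' + str(t.value[node][0]))
    else:
        print(indent + 'split: X[%d] <= %.2f' % (t.feature[node], t.threshold[node]))
        print_tree(clf, t.children_left[node], depth + 1)
        print_tree(clf, t.children_right[node], depth + 1)

print_tree(clf)
```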
###Code
i,j = np.indices((5,5))
x=np.vstack([i.flatten(),j.flatten()]).T
y=(x[:,0]>=x[:,1]).astype(int).reshape((-1,1))
_=clf.fit(x,y)
fig,ax=subplots()
_=ax.axis((-1,5,-1,5))
ax.set_aspect(1)
ax.invert_yaxis()
_=ax.plot(ma.masked_array(x[:,1],y==1),ma.masked_array(x[:,0],y==1),'ow',mec='k',ms=15)
_=ax.plot(ma.masked_array(x[:,1],y==0),ma.masked_array(x[:,0],y==0),'o',color='gray',ms=15)
for i,j in zip(clf.tree_.feature,clf.tree_.threshold):
if i==1:
_=ax.hlines(j,-1,6,lw=3.)
else:
_=ax.vlines(j,-1,6,lw=3.)
###Output
_____no_output_____
###Markdown
The decision tree fitted to this triangular matrix is very complex, as shown by the number of horizontal and vertical partitions. Thus, even though the pattern in the training data is visually clear, the decision tree cannot automatically uncover it.

[Figure](fig:example_tree_004) shows another example, but now using a simple triangular matrix. As shown by the number of vertical and horizontal partitioning lines, the decision tree that corresponds to this figure is tall and complex. Notice that if we apply a simple rotational transform to the training data, we can obtain [Figure](fig:example_tree_005), which requires a trivial decision tree to fit. Thus, there may be transformations of the training data that simplify the decision tree, but these are very difficult to derive in general. Nonetheless, this highlights a key weakness of decision trees wherein they may be easy to understand, to train, and to deploy, but may be completely blind to such time-saving and complexity-saving transformations. Indeed, in higher dimensions, it may be impossible to even visualize the potential of such latent transformations. Thus, the advantages of decision trees can be easily outmatched by other methods that we will study later that *do* have the ability to uncover useful transformations, but which will necessarily be harder to train. Another disadvantage is that because of how decision trees are built, even a single misplaced data point can cause the tree to grow very differently. This is a symptom of high variance. In all of our examples, the decision tree was able to memorize the training data exactly; as we discussed earlier, this is a sign of potential generalization errors. There are pruning algorithms that strategically remove some of the deepest nodes, but these are not yet fully implemented in Scikit-learn, as of this writing. Alternatively, restricting the maximum depth of the decision tree can have a similar effect. The `DecisionTreeClassifier` and `DecisionTreeRegressor` in Scikit-learn both have keyword arguments that specify maximum depth.

Random Forests

It is possible to combine a set of decision trees into a larger composite tree that has better performance than its individual components by using ensemble learning. This is implemented in Scikit-learn as `RandomForestClassifier`. The composite tree helps mitigate the primary weakness of decision trees --- high variance. Random forest classifiers help by averaging out the predictions of many constituent trees to minimize this variance by randomly selecting subsets of the training set to train the embedded trees. On the other hand, this randomization can increase bias because there may be a subset of the training set that yields an excellent decision tree, but the averaging effect over randomized training samples washes this out in the same averaging that reduces the variance. This is a key trade-off. The following code implements a simple random forest classifier from our last example.
###Code
from numpy import sin, cos, pi
rotation_matrix=np.matrix([[cos(pi/4),-sin(pi/4)],
[sin(pi/4),cos(pi/4)]])
xr=(rotation_matrix*(x.T)).T
xr=np.array(xr)
fig,ax=subplots()
ax.set_aspect(1)
_=ax.axis(xmin=-2,xmax=7,ymin=-4,ymax=4)
_=ax.plot(ma.masked_array(xr[:,1],y==1),ma.masked_array(xr[:,0],y==1),'ow',mec='k',ms=15)
_=ax.plot(ma.masked_array(xr[:,1],y==0),ma.masked_array(xr[:,0],y==0),'o',color='gray',ms=15)
_=clf.fit(xr,y)
for i,j in zip(clf.tree_.feature,clf.tree_.threshold):
if i==1:
_=ax.vlines(j,-1,6,lw=3.)
elif i==0:
_=ax.hlines(j,-1,6,lw=3.)
###Output
_____no_output_____
###Markdown
Using a simple rotation on the training data in [Figure](fig:example_tree_004), the decision tree can now easily fit the training data with a single partition.
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import train_test_split, cross_val_score
X_train,X_test,y_train,y_test=train_test_split(x,y,random_state=1)
clf = tree.DecisionTreeClassifier(max_depth=2)
_=clf.fit(X_train,y_train)
rfc = RandomForestClassifier(n_estimators=4,max_depth=2)
_=rfc.fit(X_train,y_train.flat)
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=4,max_depth=2)
rfc.fit(X_train,y_train.flat)
def draw_board(x,y,clf,ax=None):
if ax is None: fig,ax=subplots()
xm,ymn=x.min(0).T
ax.axis(xmin=xm-1,ymin=ymn-1)
xx,ymx=x.max(0).T
_=ax.axis(xmax=xx+1,ymax=ymx+1)
_=ax.set_aspect(1)
_=ax.invert_yaxis()
_=ax.plot(ma.masked_array(x[:,1],y==1),ma.masked_array(x[:,0],y==1),'ow',mec='k')
_=ax.plot(ma.masked_array(x[:,1],y==0),ma.masked_array(x[:,0],y==0),'o',color='gray')
for i,j in zip(clf.tree_.feature,clf.tree_.threshold):
if i==1:
_=ax.vlines(j,-1,6,lw=3.)
elif i==0:
_=ax.hlines(j,-1,6,lw=3.)
return ax
fig,axs = subplots(2,2)
# draw constituent decision trees
for est,ax in zip(rfc.estimators_,axs.flat):
_=draw_board(X_train,y_train,est,ax=ax)
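# Optional, illustrative check of the variance-reduction claim above (our addition, not
# part of the original text): compare cross-validated accuracy of the single decision
# tree and the random forest; cross_val_score was imported earlier and cv=5 is arbitrary.
scores_tree = cross_val_score(clf, x, y.ravel(), cv=5)
scores_forest = cross_val_score(rfc, x, y.ravel(), cv=5)
print('tree   accuracy: mean=%.3f, std=%.3f' % (scores_tree.mean(), scores_tree.std()))
print('forest accuracy: mean=%.3f, std=%.3f' % (scores_forest.mean(), scores_forest.std()))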
###Output
_____no_output_____ |
Data-Analysis/pandas_seaborn_scikit-learn.ipynb | ###Markdown
Reading data using pandas
###Code
import pandas as pd
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
data.head()
data.tail()
data.shape
import seaborn as sns
%matplotlib inline
sns.pairplot(data, x_vars=['TV', 'Radio', 'Newspaper'], y_vars='Sales', size =7, aspect=0.7, kind='reg')
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
features_cols = ['TV', 'Radio', 'Newspaper']
X = data[features_cols]
X.head()
print type(X)
print X.shape
y = data['Sales']
y = data.Sales
y.head()
print type(y)
print y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
print X_test.shape
print X_train.shape
print y_test.shape
print y_train.shape
###Output
(50, 3)
(150, 3)
(50,)
(150,)
###Markdown
Linear Regression in scikit-learn
###Code
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, y_train)
print linreg.intercept_
print linreg.coef_
# pair the feature names with the coefficients
zip(features_cols, linreg.coef_)
y_pred = linreg.predict(X_test)
print y_pred
# Define true and predicted response value
true = [100, 50, 30, 20]
pred = [90, 50, 50, 30]
print (10 + 0 + 20 + 10)/4
from sklearn import metrics
print metrics.mean_absolute_error(true, pred)
###Output
10
10.0
###Markdown
- Root mean Squared Error
###Code
print (10**2 + 0 + 20**2 + 10**2)/4
print metrics.mean_squared_error(true, pred)
# Calculate RMSE by hand
import numpy as np
print np.sqrt((10**2 + 0 + 20**2 + 10**2)/4)
print np.sqrt(metrics.mean_squared_error(true, pred))
###Output
12.2474487139
12.2474487139
###Markdown
- Computing the RMSE for our Sales predictions
###Code
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
###Output
1.40465142303
|
music-customer-analysis-scala.ipynb | ###Markdown
Analyzing Customer-Music Data using Apache SparkThe original Drill & Tableau based tutorial is at https://mapr.com/blog/real-time-user-profiles-spark-drill-and-mapr-db/. I have converted them to Spark 2.4 Jupyter Scala Notebooks. In addition to that I have added many more Spark based Data Analysis sections, Side by Side Spark comparisons DF API and Spark SQL constructs to realize the same use case. Also used Jupyter Notebook for data visualization.A special section for working with RDDs is also included.Users are continuously connecting to the service and listening to tracks that they like -- this generates our main data set. The behaviors captured in these events, over time, represent the highest level of detail about actual behaviors of customers as they consume the service by listening to music. In addition to the events of listening to individual tracks, we have a few other data sets representing all the information we might normally have in such a service. In this post we will make use of the following three data sets. 1. Understanding the Data Set**Individual customers listening to individual tracks: (tracks.csv)** - a collection of events, one per line, where each event is a client listening to a track.This data is approximately 1M lines and contains simulated listener events over several months. Field Name Event ID Customer ID Track ID Datetime Mobile Listening Zip Type Integer Integer Integer String Integer Integer Example Value 9999767 2597 788 2014-12-01 09:54:09 0 11003 The event, customer and track IDs tell us what occurred (a customer listened to a certain track), while the other fields tell us some associated information, like whether the customer was listening on a mobile device and a guess about their location while they were listening. With many customers listening to many tracks, this data can get very large and will be the input into our Spark job.**Customer information:** - information about individual customers. Field Name Customer ID Name Gender Address ZIP Sign Date Status Level Campaign Linked with Apps? Type Integer String Integer String Integer String Integer Integer Integer Integer Example Value 10 Joshua Threadgill 0 10084 Easy Gate Bend 66216 01/13/2013 0 1 1 1 The fields are defined as follows:```Customer ID: a unique identifier for that customerName, gender, address, zip: the customer’s associated informationSign date: the date of addition to the serviceStatus: indicates whether or not the account is active (0 = closed, 1 = active)Level: indicates what level of service -- 0, 1, 2 for Free, Silver and Gold, respectivelyCampaign: indicates the campaign under which the user joined, defined as the following (fictional) campaigns driven by our (also fictional) marketing team:NONE - no campaign30DAYFREE - a ‘30 days free’ trial offerSUPERBOWL - a Superbowl-related programRETAILSTORE - an offer originating in brick-and-mortar retail storesWEBOFFER - an offer for web-originated customers```**Previous ad clicks: (clicks.csv)** - a collection of user click events indicating which ad was played to the user and whether or not they clicked on it. Field Name EventID CustID AdClicked Localtime Type Integer Integer String String Example Value 0 109 ADV_FREE_REFERRAL 2014-12-01 09:54:09 The fields that interest us are the foreign key identifying the customer (CustID), a string indicating which ad they clicked (AdClicked), and the time when it happened (Localtime). 
Note that we could use a lot more features here, such as basic information about the customer (gender, etc.), but to keep things simple for the example we’ll leave that as a future exercise.
###Code
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{SparkSession, DataFrame, Dataset, Row}
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.apache.spark.sql.functions.{col, udf, asc, desc, when, array, struct}
import org.apache.spark.sql.functions.{sum, avg, count, countDistinct, hour, lit, format_number, explode}
import scala.collection.mutable.Set
###Output
_____no_output_____
###Markdown
2. Creating the Spark Session
###Code
val spark: SparkSession = (SparkSession
.builder
.master("local[*]")
.appName("music-customer-analysis-with-spark")
.getOrCreate())
###Output
_____no_output_____
###Markdown
3. Load the data from files into DataFrames
###Code
val MUSIC_TRACKS_DATA: String = "data/tracks.csv"
val CUSTOMER_DATA: String = "data/cust.csv"
val CLICKS_DATA: String = "data/clicks.csv"
//define the schema, corresponding to a line in the csv data file for music
val music_schema: StructType = new StructType(
Array(
new StructField("event_id", IntegerType, nullable=true),
new StructField("customer_id", IntegerType, nullable=true),
new StructField("track_id", StringType, nullable=true),
new StructField("datetime", StringType, nullable=true),
new StructField("is_mobile", IntegerType, nullable=true),
new StructField("zip", IntegerType, nullable=true)
))
//define the schema, corresponding to a line in the csv data file for customer
val cust_schema: StructType = new StructType(
Array(
new StructField("customer_id", IntegerType, nullable=true),
new StructField("name", StringType, nullable=true),
new StructField("gender", IntegerType, nullable=true),
new StructField("address", StringType, nullable=true),
new StructField("zip", IntegerType, nullable=true),
new StructField("sign_date", StringType, nullable=true),
new StructField("status", IntegerType, nullable=true),
new StructField("level", IntegerType, nullable=true),
new StructField("campaign", IntegerType, nullable=true),
new StructField("lnkd_with_apps", IntegerType, nullable=true)
))
//define the schema, corresponding to a line in the csv data file for ad click
val click_schema: StructType = StructType(
Array(
new StructField("event_id", IntegerType, nullable=true),
new StructField("customer_id", IntegerType, nullable=true),
new StructField("ad_clicked", StringType, nullable=true),
new StructField("datetime", StringType, nullable=true)
))
//Load data
val music_df: DataFrame = spark.read.schema(music_schema).csv(path=MUSIC_TRACKS_DATA).cache()
music_df.createOrReplaceTempView("music")
val cust_df: DataFrame = spark.read.schema(cust_schema).option("header", "true").csv(path=CUSTOMER_DATA).cache()
cust_df.createOrReplaceTempView("cust")
val click_df: DataFrame = spark.read.schema(click_schema).option("header", "false").csv(path=CLICKS_DATA).cache()
click_df.createOrReplaceTempView("click")
//How many music data rows
println(music_df.count())
music_df.show(5)
//How many customer data rows
println(cust_df.count())
cust_df.show(5)
//How many ads click data rows
println(click_df.count())
click_df.show(5)
###Output
+--------+-----------+--------------------+-------------------+
|event_id|customer_id| ad_clicked| datetime|
+--------+-----------+--------------------+-------------------+
| 76611| 2488| ADV_FREE_REFERRAL|2014-12-25 05:08:59|
| 305706| 2476|ADV_DONATION_CHARITY|2014-11-26 22:24:21|
| 156074| 1307| ADV_FREE_REFERRAL|2014-10-15 03:52:40|
| 192762| 1733| ADV_LIKE_FACEBOOK|2014-10-20 14:55:08|
| 76106| 2| ADV_LIKE_FACEBOOK|2014-11-19 00:22:13|
+--------+-----------+--------------------+-------------------+
only showing top 5 rows
###Markdown
4. Data Exploration 4.1 Compute Hourly Summary profile of each customer: We will now look at customers' listening behaviour across the various hours of the day, for example whether they tend to listen more in the morning or at night. **Add a new Hour Column to the Music data:**
###Code
var hourly_music_df: DataFrame = music_df.withColumn("hour", hour(col("datetime"))).cache()
hourly_music_df.show(5)
###Output
+--------+-----------+--------+-------------------+---------+-----+----+
|event_id|customer_id|track_id| datetime|is_mobile| zip|hour|
+--------+-----------+--------+-------------------+---------+-----+----+
| 0| 48| 453|2014-10-23 03:26:20| 0|72132| 3|
| 1| 1081| 19|2014-10-15 18:32:14| 1|17307| 18|
| 2| 532| 36|2014-12-10 15:33:16| 1|66216| 15|
| 3| 2641| 822|2014-10-20 02:24:55| 1|36690| 2|
| 4| 2251| 338|2014-11-18 07:16:05| 1|61377| 7|
+--------+-----------+--------+-------------------+---------+-----+----+
only showing top 5 rows
###Markdown
**Divide the entire day into four time buckets based on the hour:** Bucket the listen datetime into one of four buckets in the day, i.e. night, morning, afternoon or evening, and mark 1 if the song was listened to in that bucket.
###Code
hourly_music_df = (hourly_music_df
.withColumn("night", when((col("hour") < 5) || (col("hour") >= 22), 1).otherwise(0))
.withColumn("morn", when((col("hour") >= 5) && (col("hour") < 12), 1).otherwise(0))
.withColumn("aft", when((col("hour") >= 12) && (col("hour") < 17), 1).otherwise(0))
.withColumn("eve", when((col("hour") >= 17) && (col("hour") < 22), 1).otherwise(0))
.cache())
###Output
_____no_output_____
###Markdown
4.1.1 Compute Customer Hourly Summary using DF API: Now we're ready to compute a summary profile for each user. We will leverage Spark SQL functions to compute some high-level data: + Average number of tracks listened to during each period of the day: morning, afternoon, evening, and night. We arbitrarily define the time ranges in the code. + Total unique tracks listened to by that user, i.e. the set of unique track IDs. + Total mobile tracks listened to by that user, i.e. the count of tracks listened to that had their mobile flag set.
###Code
val cust_profile_df: DataFrame = (hourly_music_df
.select("customer_id", "track_id", "night", "morn", "aft", "eve", "is_mobile")
.groupBy("customer_id")
.agg(countDistinct("track_id"), sum("night"),sum("morn"),sum("aft"),sum("eve"), sum("is_mobile")
)).cache()
cust_profile_df.show(10)
###Output
+-----------+------------------------+----------+---------+--------+--------+--------------+
|customer_id|count(DISTINCT track_id)|sum(night)|sum(morn)|sum(aft)|sum(eve)|sum(is_mobile)|
+-----------+------------------------+----------+---------+--------+--------+--------------+
| 148| 443| 149| 170| 109| 124| 476|
| 463| 306| 103| 99| 84| 76| 176|
| 1591| 171| 47| 64| 36| 40| 85|
| 2366| 143| 55| 46| 30| 25| 113|
| 4101| 100| 31| 28| 26| 22| 85|
| 1342| 173| 53| 60| 36| 42| 102|
| 2659| 119| 42| 43| 22| 22| 59|
| 1238| 191| 72| 64| 30| 46| 158|
| 4519| 103| 37| 30| 20| 20| 54|
| 1580| 162| 44| 52| 43| 41| 134|
+-----------+------------------------+----------+---------+--------+--------+--------------+
only showing top 10 rows
###Markdown
4.1.2 Compute Customer Hourly Summary using SQL: In the previous sections we used only DF APIs to calculate the hourly profiles. However, we can use pure Spark SQL to achieve the same results, which is much less verbose. We will still leverage Spark SQL functions to compute those high-level data: + Average number of tracks listened to during each period of the day: morning, afternoon, evening, and night. We arbitrarily define the time ranges in the code. + Total unique tracks listened to by that user, i.e. the set of unique track IDs. + Total mobile tracks listened to by that user, i.e. the count of tracks listened to that had their mobile flag set. **Divide the entire day into four time buckets based on the hour:** Bucket the listen datetime into one of four buckets in the day, i.e. night, morning, afternoon or evening, and mark 1 if the song was listened to in that bucket.
###Code
spark.sql(
"""
SELECT *,
HOUR(datetime) as hour,
CASE WHEN HOUR(datetime) < 5 OR HOUR(datetime) >= 22 THEN 1 ELSE 0 END AS night,
CASE WHEN HOUR(datetime) >= 5 AND HOUR(datetime) < 12 THEN 1 ELSE 0 END AS morn,
CASE WHEN HOUR(datetime) >= 12 AND HOUR(datetime) < 17 THEN 1 ELSE 0 END AS aft,
CASE WHEN HOUR(datetime) >= 17 AND HOUR(datetime) < 22 THEN 1 ELSE 0 END AS eve
FROM music
""").show(10)
###Output
+--------+-----------+--------+-------------------+---------+-----+----+-----+----+---+---+
|event_id|customer_id|track_id| datetime|is_mobile| zip|hour|night|morn|aft|eve|
+--------+-----------+--------+-------------------+---------+-----+----+-----+----+---+---+
| 0| 48| 453|2014-10-23 03:26:20| 0|72132| 3| 1| 0| 0| 0|
| 1| 1081| 19|2014-10-15 18:32:14| 1|17307| 18| 0| 0| 0| 1|
| 2| 532| 36|2014-12-10 15:33:16| 1|66216| 15| 0| 0| 1| 0|
| 3| 2641| 822|2014-10-20 02:24:55| 1|36690| 2| 1| 0| 0| 0|
| 4| 2251| 338|2014-11-18 07:16:05| 1|61377| 7| 0| 1| 0| 0|
| 5| 1811| 6|2014-11-18 02:00:48| 1|20115| 2| 1| 0| 0| 0|
| 6| 3644| 24|2014-12-12 15:24:02| 1|15330| 15| 0| 0| 1| 0|
| 7| 250| 726|2014-10-07 09:48:53| 0|33570| 9| 0| 1| 0| 0|
| 8| 1782| 442|2014-12-30 15:27:31| 1|41240| 15| 0| 0| 1| 0|
| 9| 2932| 775|2014-11-12 07:45:55| 0|63565| 7| 0| 1| 0| 0|
+--------+-----------+--------+-------------------+---------+-----+----+-----+----+---+---+
only showing top 10 rows
###Markdown
**Compute the hourly profiles:** We can combine the above bucketing and calculating the hourly summary in one SQL as follows.
###Code
spark.sql(
"""
SELECT customer_id, COUNT(DISTINCT track_id), SUM(night), SUM(morn), SUM(aft), SUM(eve), SUM(is_mobile)
FROM(
SELECT *,
HOUR(datetime) as hour,
CASE WHEN HOUR(datetime) < 5 OR HOUR(datetime) >= 22 THEN 1 ELSE 0 END AS night,
CASE WHEN HOUR(datetime) >= 5 AND HOUR(datetime) < 12 THEN 1 ELSE 0 END AS morn,
CASE WHEN HOUR(datetime) >= 12 AND HOUR(datetime) < 17 THEN 1 ELSE 0 END AS aft,
CASE WHEN HOUR(datetime) >= 17 AND HOUR(datetime) < 22 THEN 1 ELSE 0 END AS eve
FROM music)
GROUP BY customer_id
""").show(10)
###Output
+-----------+------------------------+----------+---------+--------+--------+--------------+
|customer_id|count(DISTINCT track_id)|sum(night)|sum(morn)|sum(aft)|sum(eve)|sum(is_mobile)|
+-----------+------------------------+----------+---------+--------+--------+--------------+
| 148| 443| 149| 170| 109| 124| 476|
| 463| 306| 103| 99| 84| 76| 176|
| 1591| 171| 47| 64| 36| 40| 85|
| 2366| 143| 55| 46| 30| 25| 113|
| 4101| 100| 31| 28| 26| 22| 85|
| 1342| 173| 53| 60| 36| 42| 102|
| 2659| 119| 42| 43| 22| 22| 59|
| 1238| 191| 72| 64| 30| 46| 158|
| 4519| 103| 37| 30| 20| 20| 54|
| 1580| 162| 44| 52| 43| 41| 134|
+-----------+------------------------+----------+---------+--------+--------+--------------+
only showing top 10 rows
###Markdown
We can see the result is the same as the result from the DF APIs. 4.2 Summary Statistics: Since we have the summary data readily available, we can compute some basic statistics on it.
###Code
//Referring to cust_profile_df from section 4.1.1 we can use the describe() function to get the summary statistics
cust_profile_df.select(cust_profile_df.columns.filter(c => !c.equals("customer_id")).map(col): _*).describe().show()
// store the describe dataframe temporarily
var summary_stats_df: DataFrame = cust_profile_df.select(cust_profile_df.columns.filter(c => !c.equals("customer_id")).map(col): _*).describe()
###Output
_____no_output_____
###Markdown
4.2.1 Prettifying Summary Statistics: There are too many decimal places for mean and stddev in the describe() dataframe. We can format the numbers to show only two decimal places. Pay careful attention to the datatype that describe() returns: it is a String, so we need to cast it to a float before we can format it. We use cast() and format_number() on the individual columns to reformat.
###Code
summary_stats_df.select(summary_stats_df("summary"),
format_number(summary_stats_df("count(DISTINCT track_id)").cast("float"), 2).alias("count(DISTINCT track_id)"),
format_number(summary_stats_df("sum(night)").cast("float"), 2).alias("sum(night)"),
format_number(summary_stats_df("sum(morn)").cast("float"), 2).alias("sum(morn)"),
format_number(summary_stats_df("sum(aft)").cast("float"), 2).alias("sum(aft)"),
format_number(summary_stats_df("sum(is_mobile)").cast("float"), 2).alias("sum(is_mobile)"))
.show()
###Output
+-------+------------------------+----------+---------+--------+--------------+
|summary|count(DISTINCT track_id)|sum(night)|sum(morn)|sum(aft)|sum(is_mobile)|
+-------+------------------------+----------+---------+--------+--------------+
| count| 5,000.00| 5,000.00| 5,000.00|5,000.00| 5,000.00|
| mean| 170.29| 58.30| 58.29| 41.64| 121.55|
| stddev| 117.04| 67.27| 67.40| 47.88| 148.80|
| min| 68.00| 15.00| 16.00| 9.00| 32.00|
| max| 1,617.00| 2,139.00| 2,007.00|1,460.00| 5,093.00|
+-------+------------------------+----------+---------+--------+--------------+
###Markdown
4.2.2 Prettifying Summary Statistics - Even Smarter: Real-life data sets can have many more columns, so spelling out each column in the code would not be feasible. We can instead loop over the columns (the counterpart of a Python list comprehension) to do this smartly, and we can even exclude columns we don't want. **Apply a for loop to format the columns, excluding the summary column:**
###Code
for(col_name <- summary_stats_df.columns.filter(col_name => !col_name.equals("summary"))) {
summary_stats_df = summary_stats_df.withColumn(col_name, format_number(col(col_name).cast("float"), 2))
}
summary_stats_df.show()
###Output
+-------+------------------------+----------+---------+--------+--------+--------------+
|summary|count(DISTINCT track_id)|sum(night)|sum(morn)|sum(aft)|sum(eve)|sum(is_mobile)|
+-------+------------------------+----------+---------+--------+--------+--------------+
| count| 5,000.00| 5,000.00| 5,000.00|5,000.00|5,000.00| 5,000.00|
| mean| 170.29| 58.30| 58.29| 41.64| 41.76| 121.55|
| stddev| 117.04| 67.27| 67.40| 47.88| 48.01| 148.80|
| min| 68.00| 15.00| 16.00| 9.00| 9.00| 32.00|
| max| 1,617.00| 2,139.00| 2,007.00|1,460.00|1,480.00| 5,093.00|
+-------+------------------------+----------+---------+--------+--------+--------------+
###Markdown
Interpreting the summary statistics:> People listen to the highest number of songs at night! 4.3 An Ode to RDD - Compute Customer Hourly Summary using a Custom Group Function:If you must work with RDDs instead of DataFrames, we can compute a summary profile for each user by passing a function we'll write to mapValues to compute the same high-level data:+ Number of tracks listened to during each period of the day: morning, afternoon, evening, and night. We arbitrarily define the time ranges in the code.+ Total unique tracks listened to by that user, i.e. the set of unique track IDs.+ Total mobile tracks listened to by that user, i.e. the count of tracks that were played with the mobile flag set.
###Code
//let's select only the original columns
val music_rdd: RDD[Row] = music_df.select("customer_id", "track_id", "datetime", "is_mobile", "zip").rdd.cache()
music_rdd.take(5).foreach(println)
###Output
[48,453,2014-10-23 03:26:20,0,72132]
[1081,19,2014-10-15 18:32:14,1,17307]
[532,36,2014-12-10 15:33:16,1,66216]
[2641,822,2014-10-20 02:24:55,1,36690]
[2251,338,2014-11-18 07:16:05,1,61377]
###Markdown
**Use customer_id as the key:**
###Code
//Use customer_id as the key, we will later group by on this column
music_rdd.map(record => (record(0), record)).take(5).foreach(println)
###Output
(48,[48,453,2014-10-23 03:26:20,0,72132])
(1081,[1081,19,2014-10-15 18:32:14,1,17307])
(532,[532,36,2014-12-10 15:33:16,1,66216])
(2641,[2641,822,2014-10-20 02:24:55,1,36690])
(2251,[2251,338,2014-11-18 07:16:05,1,61377])
###Markdown
**Develop the User Stats function:**We loop over each customer's tracks to find the number of unique tracks they listened to and how many plays fell into each time of day.
###Code
def compute_stats_byuser(tracks: Iterable[Row]) : (Double, Double, Double, Double, Double, Double) = {
var mcount, morn, aft, eve, night = 0
val tracklist: Set[String] = Set()
for(track <- tracks) {
//println(track)
//println(track.schema)
val custid = track.getAs[Int](0)
val trackid = track.getAs[String](1)
val hour = track.getAs[String](2).split(" ")(1).split(":")(0).toInt
val mobile = track.getAs[Int](3)
val zip = track.getAs[Int](4)
tracklist.add(trackid)
mcount += mobile
if (hour < 5) {
night += 1
} else if (hour < 12) {
morn += 1
} else if (hour < 17) {
aft += 1
} else if (hour < 22) {
eve += 1
} else {
night += 1
}
}
(tracklist.size, morn, aft, eve, night, mcount)
}
val cust_profile_rdd = (music_rdd.map(record => (record(0), record))
.groupByKey().mapValues(tracks => compute_stats_byuser(tracks)))
cust_profile_rdd.cache()
cust_profile_rdd.take(10).foreach(println)
###Output
(4904,(106.0,40.0,20.0,20.0,33.0,72.0))
(4552,(105.0,24.0,22.0,33.0,32.0,91.0))
(3456,(96.0,29.0,23.0,25.0,23.0,47.0))
(4680,(91.0,25.0,21.0,22.0,29.0,49.0))
(1080,(185.0,71.0,49.0,29.0,58.0,98.0))
(320,(313.0,108.0,68.0,92.0,124.0,328.0))
(752,(260.0,87.0,70.0,66.0,78.0,158.0))
(3272,(112.0,38.0,20.0,27.0,35.0,53.0))
(408,(272.0,101.0,71.0,58.0,91.0,231.0))
(4352,(104.0,33.0,26.0,24.0,31.0,86.0))
###Markdown
**Compare the Results that we got from RDD and previously from DF methods:**
###Code
cust_profile_rdd.filter(record => record._1 == 48).take(1)
cust_profile_df.filter(col("customer_id") === 48).show()
###Output
+-----------+------------------------+----------+---------+--------+--------+--------------+
|customer_id|count(DISTINCT track_id)|sum(night)|sum(morn)|sum(aft)|sum(eve)|sum(is_mobile)|
+-----------+------------------------+----------+---------+--------+--------+--------------+
| 48| 696| 277| 310| 217| 223| 503|
+-----------+------------------------+----------+---------+--------+--------+--------------+
###Markdown
Woo Hoo! We can clearly see that the values in each of the columns match! We are on the right track!**Summary Statistics:**Since we have the summary data readily available, we can compute some basic statistics on it. Since we are working with an RDD we cannot use the DataFrame's `describe()` method. Instead we will use the `colStats` function from `org.apache.spark.mllib.stat.Statistics`.
###Code
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.mllib.linalg.Vectors // needed for Vectors.dense below
//compute aggregate stats for entire track history
val summary_stats_ml = Statistics.colStats(cust_profile_rdd.map(x => Vectors.dense(Array(x._2._1, x._2._2, x._2._3, x._2._4, x._2._5, x._2._6))))
println(summary_stats_ml.count)
println(summary_stats_ml.mean)
println(summary_stats_ml.max)
println(summary_stats_ml.min)
###Output
[68.0,16.0,9.0,9.0,15.0,32.0]
###Markdown
4.4 PIVOT Tables With Multiple WHENs - Compute Customer Hourly Summary:If you intend to venture into more advanced Spark functions, we can use the `pivot` function to do what we have done so far in far fewer steps.First we extract the hour, convert that hour into several buckets and then pivot on those buckets.
###Code
music_df.select(
col("event_id"),
col("customer_id"),
col("track_id"),
col("datetime"),
col("is_mobile"),
col("zip"),
hour(col("datetime")).alias("hour")
).show(10)
// Create the hour buckets
music_df
.select(col("event_id"), col("customer_id"), col("track_id"), col("datetime"), col("is_mobile"), col("zip"),
hour(col("datetime")).alias("hour"),
when((hour(col("datetime")) < 5) || (hour(col("datetime")) >= 22), lit("night"))
.when((hour(col("datetime")) >= 5) && (hour(col("datetime")) < 12), lit("morn"))
.when((hour(col("datetime")) >= 12) && (hour(col("datetime")) < 17), lit("aft"))
.when((hour(col("datetime")) >= 17) && (hour(col("datetime")) < 22), lit("eve"))
.alias("bucket")
).show(10)
// Create the hour buckets and then pivot on the hour buckets
val hourly_pivot_df = music_df
.select(col("event_id"), col("customer_id"), col("track_id"), col("datetime"),
col("is_mobile"), col("zip"), hour(col("datetime")).alias("hour"),
when((hour(col("datetime")) < 5) || (hour(col("datetime")) >= 22), lit("night"))
.when((hour(col("datetime")) >= 5) && (hour(col("datetime")) < 12), lit("morn"))
.when((hour(col("datetime")) >= 12) && (hour(col("datetime")) < 17), lit("aft"))
.when((hour(col("datetime")) >= 17) && (hour(col("datetime")) < 22), lit("eve"))
.alias("bucket"))
.select("customer_id", "bucket")
.groupBy("customer_id")
.pivot("bucket", Array("night", "morn", "aft", "eve"))
.agg(count(col("bucket"))
).cache()
hourly_pivot_df.show(10)
###Output
+-----------+-----+----+---+---+
|customer_id|night|morn|aft|eve|
+-----------+-----+----+---+---+
| 471| 84| 96| 60| 73|
| 3175| 35| 28| 25| 21|
| 833| 70| 75| 48| 63|
| 1088| 69| 62| 41| 46|
| 463| 103| 99| 84| 76|
| 1238| 72| 64| 30| 46|
| 1645| 55| 42| 54| 35|
| 1342| 53| 60| 36| 42|
| 1959| 42| 43| 34| 24|
| 2366| 55| 46| 30| 25|
+-----------+-----+----+---+---+
only showing top 10 rows
###Markdown
**Compare the Profile Summary that we got from Multi Step DF API and SQL above and the Pivot operation:**
###Code
hourly_pivot_df.filter(col("customer_id") === 48).show()
cust_profile_df.filter(col("customer_id") === 48).show()
###Output
+-----------+------------------------+----------+---------+--------+--------+--------------+
|customer_id|count(DISTINCT track_id)|sum(night)|sum(morn)|sum(aft)|sum(eve)|sum(is_mobile)|
+-----------+------------------------+----------+---------+--------+--------+--------------+
| 48| 696| 277| 310| 217| 223| 503|
+-----------+------------------------+----------+---------+--------+--------+--------------+
###Markdown
**YAY!** We can clearly see that the results from our pivot operation match the results we got from the DF API and SQL constructs! To make the output match exactly, we need one final touch: gather the stats for the number of unique tracks and the is_mobile count separately and then join them with the pivot table.
###Code
val tracks_summary_df = (music_df
.select("customer_id", "track_id", "is_mobile")
.groupBy("customer_id")
.agg(countDistinct("track_id"), sum("is_mobile"))
).cache()
tracks_summary_df.show(10)
(tracks_summary_df
.join(hourly_pivot_df, hourly_pivot_df("customer_id") === tracks_summary_df("customer_id"), "inner")
.select(hourly_pivot_df("customer_id"), col("count(DISTINCT track_id)"),
col("night").alias("sum(night)"),
col("morn").alias("sum(morn)"),
col("aft").alias("sum(aft)"),
col("eve").alias("sum(eve)"),
col("sum(is_mobile)"))
.filter(col("customer_id") === 48)
).show()
cust_profile_df.filter(col("customer_id") === 48).show()
hourly_pivot_df.unpersist()
tracks_summary_df.unpersist()
###Output
_____no_output_____
###Markdown
4.5 PIVOT & UNPIVOT Tables:Oftentimes we need to UNPIVOT tables. This is just the reverse of the PIVOT function, converting from a wide format to a narrow format. It is similar to the pandas `melt` function.We can achieve it with a combination of `struct`, `array` and `explode` transformations, i.e. `explode(array(struct(...), struct(...), ...))`.First we extract the hour, convert that hour into several buckets and then pivot on those buckets to create the hourly_pivot DataFrame.
###Code
// Create the hour buckets and then pivot on the hour buckets
val hourly_pivot_df = (music_df.select($"event_id", $"customer_id", $"track_id", $"datetime", $"is_mobile", $"zip",
hour($"datetime").alias("hour"),
when((hour($"datetime") < 5) or (hour($"datetime") >= 22), lit("night"))
.when((hour($"datetime") >= 5) and (hour($"datetime") < 12), lit("morn"))
.when((hour($"datetime") >= 12) and (hour($"datetime") < 17), lit("aft"))
.when((hour($"datetime") >= 17) and (hour($"datetime") < 22), lit("eve"))
.alias("bucket"))
.select("customer_id", "bucket")
.groupBy("customer_id")
.pivot("bucket", Array("night", "morn", "aft", "eve"))
.agg(count($"bucket"))
).cache()
hourly_pivot_df.show(10)
###Output
+-----------+-----+----+---+---+
|customer_id|night|morn|aft|eve|
+-----------+-----+----+---+---+
| 471| 84| 96| 60| 73|
| 3175| 35| 28| 25| 21|
| 833| 70| 75| 48| 63|
| 1088| 69| 62| 41| 46|
| 463| 103| 99| 84| 76|
| 1238| 72| 64| 30| 46|
| 1645| 55| 42| 54| 35|
| 1342| 53| 60| 36| 42|
| 1959| 42| 43| 34| 24|
| 2366| 55| 46| 30| 25|
+-----------+-----+----+---+---+
only showing top 10 rows
###Markdown
Then we convert each column into a struct column and then combine all those struct columns to form an array of struct columns. It is important to give the same field names to the individual elements within the struct columns, otherwise the array function will complain that it has not been provided with similar elements, e.g. `struct(lit("night").alias("bucket"), col("night").alias("count"))`
###Code
(hourly_pivot_df
.select($"customer_id",
array(
struct(lit("night").alias("bucket"), col("night").alias("count")),
struct(lit("morn").alias("bucket"), col("morn").alias("count")),
struct(lit("aft").alias("bucket"), col("aft").alias("count")),
struct(lit("eve").alias("bucket"), col("eve").alias("count"))
).alias("array_of_struct_bucket_count")
)).show(10, false)
###Output
+-----------+------------------------------------------------+
|customer_id|array_of_struct_bucket_count |
+-----------+------------------------------------------------+
|471 |[[night, 84], [morn, 96], [aft, 60], [eve, 73]] |
|3175 |[[night, 35], [morn, 28], [aft, 25], [eve, 21]] |
|833 |[[night, 70], [morn, 75], [aft, 48], [eve, 63]] |
|1088 |[[night, 69], [morn, 62], [aft, 41], [eve, 46]] |
|463 |[[night, 103], [morn, 99], [aft, 84], [eve, 76]]|
|1238 |[[night, 72], [morn, 64], [aft, 30], [eve, 46]] |
|1645 |[[night, 55], [morn, 42], [aft, 54], [eve, 35]] |
|1342 |[[night, 53], [morn, 60], [aft, 36], [eve, 42]] |
|1959 |[[night, 42], [morn, 43], [aft, 34], [eve, 24]] |
|2366 |[[night, 55], [morn, 46], [aft, 30], [eve, 25]] |
+-----------+------------------------------------------------+
only showing top 10 rows
###Markdown
We then explode the array-of-structs column so that each struct element becomes its own row.
###Code
(hourly_pivot_df
.select($"customer_id",
explode(
array(
struct(lit("night").alias("bucket"), col("night").alias("count")),
struct(lit("morn").alias("bucket"), col("morn").alias("count")),
struct(lit("aft").alias("bucket"), col("aft").alias("count")),
struct(lit("eve").alias("bucket"), col("eve").alias("count"))
)
).alias("exploded_struct_bucket_count")
)).show(10, false)
###Output
+-----------+----------------------------+
|customer_id|exploded_struct_bucket_count|
+-----------+----------------------------+
|471 |[night, 84] |
|471 |[morn, 96] |
|471 |[aft, 60] |
|471 |[eve, 73] |
|3175 |[night, 35] |
|3175 |[morn, 28] |
|3175 |[aft, 25] |
|3175 |[eve, 21] |
|833 |[night, 70] |
|833 |[morn, 75] |
+-----------+----------------------------+
only showing top 10 rows
###Markdown
And finally, we break the exploded struct column into its individual components and extract them out as separate columns.
###Code
(hourly_pivot_df
.withColumn("exploded_struct_bucket_count",
explode(
array(
struct(lit("night").alias("bucket"), col("night").alias("count")),
struct(lit("morn").alias("bucket"), col("morn").alias("count"))
)
)
)
.selectExpr("customer_id", "exploded_struct_bucket_count.bucket as bucket", "exploded_struct_bucket_count.count as count")
).show(10, false)
hourly_pivot_df.unpersist()
###Output
_____no_output_____
###Markdown
4.6 Average Number of Tracks Listened to by Customers of Different Levels during Different Times of the Day:
###Code
cust_df.show(5)
// Define a udf to Map from level number to actual level string
val udfIndexTolevel: UserDefinedFunction = udf((mon: Int) => {
val level_map: Map[Int, String] = Map(0 -> "Free", 1 -> "Silver", 2 -> "Gold")
level_map.get(mon)
}, StringType)
var result_df: DataFrame =
(cust_df.join(cust_profile_df, cust_df("customer_id") === cust_profile_df("customer_id"), "inner")
.select(udfIndexTolevel(col("level")).alias("level"),
col("sum(night)"), col("sum(morn)"), col("sum(aft)"), col("sum(eve)"))
.groupBy("level")
.agg(avg("sum(aft)").alias("Afternoon"),
avg("sum(eve)").alias("Evening"),
avg("sum(morn)").alias("Morning"),
avg("sum(night)").alias("Night")
)
)
result_df.show()
###Output
+------+------------------+------------------+-----------------+------------------+
| level| Afternoon| Evening| Morning| Night|
+------+------------------+------------------+-----------------+------------------+
|Silver| 42.12979890310786|42.409506398537474|59.01401584399756| 59.16209628275442|
| Gold|39.868173258003765| 40.22975517890772|56.35969868173258|55.685499058380415|
| Free| 41.6944837340877|41.675035360678926|58.23373408769449| 58.2963224893918|
+------+------------------+------------------+-----------------+------------------+
###Markdown
4.7 Distribution of Customers By Level:
###Code
result_df =
(cust_df.select(col("level"), when(col("gender") === 0, "Male").otherwise("Female").alias("gender"))
.groupBy(col("level"))
.pivot("gender")
.count()
.orderBy(desc("level")))
result_df.show(5)
###Output
+-----+------+----+
|level|Female|Male|
+-----+------+----+
| 2| 201| 330|
| 1| 670| 971|
| 0| 1145|1683|
+-----+------+----+
###Markdown
4.8 Top 10 Zip Codes: Which regions consume the most from this service:
###Code
result_df = cust_df.groupBy("zip").count().orderBy(desc("count")).limit(10)
result_df.show()
###Output
+-----+-----+
| zip|count|
+-----+-----+
| 5341| 4|
|80821| 4|
|71458| 3|
|31409| 3|
|70446| 3|
|20098| 3|
|80459| 3|
|57445| 3|
|78754| 3|
|47577| 3|
+-----+-----+
###Markdown
4.9 Distribution of Customers By SignUp Campaign:
###Code
// Define a udf to Map from campaign number to actual campaign description
val udfIndexToCampaign: UserDefinedFunction = udf((camptype: Int) => {
val campaign_map: Map[Int, String] = Map(0 -> "None", 1 -> "30DaysFree", 2 -> "SuperBowl", 3 -> "RetailStore", 4 -> "WebOffer")
campaign_map.get(camptype)
}, StringType)
result_df = (cust_df
.select(udfIndexToCampaign(col("campaign")).alias("campaign"))
.groupBy("campaign")
.count()
.orderBy("count"))
result_df.show()
###Output
+-----------+-----+
| campaign|count|
+-----------+-----+
| SuperBowl| 240|
|RetailStore| 489|
| None| 608|
| WebOffer| 750|
| 30DaysFree| 2913|
+-----------+-----+
###Markdown
4.10 Average Unique Track Count By Customer Level:
###Code
result_df = (music_df.select("customer_id", "track_id")
.groupBy("customer_id")
.agg(countDistinct("track_id").alias("unique_track_count"))
.join(cust_df, music_df("customer_id") === cust_df("customer_id"), "inner")
.select(udfIndexTolevel(col("level")).alias("level"), col("unique_track_count"))
.groupBy("level")
.agg(avg("unique_track_count").alias("avg_unique_track_count")))
result_df.show()
###Output
+------+----------------------+
| level|avg_unique_track_count|
+------+----------------------+
|Silver| 170.2772699573431|
| Gold| 166.85310734463278|
| Free| 170.9515558698727|
+------+----------------------+
###Markdown
4.11 Average Mobile Track Count By Customer Level:
###Code
result_df = (music_df.select("customer_id", "track_id")
.filter(col("is_mobile") === 1)
.groupBy("customer_id")
.count()
.withColumnRenamed("count", "mobile_track_count")
.join(cust_df, music_df("customer_id") === cust_df("customer_id"), "inner")
.select(udfIndexTolevel(col("level")).alias("level"), col("mobile_track_count"))
.groupBy("level")
.agg(avg("mobile_track_count").alias("avg_mobile_track_count"))
.orderBy("avg_mobile_track_count"))
result_df.show()
###Output
+------+----------------------+
| level|avg_mobile_track_count|
+------+----------------------+
| Free| 100.01308345120226|
|Silver| 146.1614868982328|
| Gold| 160.22033898305085|
+------+----------------------+
###Markdown
5. Destroying the Spark Session & Cleaning Up
###Code
music_df.unpersist()
cust_df.unpersist()
click_df.unpersist()
spark.stop()
###Output
_____no_output_____ |
simulations/spatial_prior/spatial_prior.ipynb | ###Markdown
Importing packages and data
###Code
#!/usr/bin/env python3
# -*- coding: utf-8
import copy
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scanpy as sc
import scvi
import torch
from scvi.model import SCVI, CondSCVI
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors
from destvi_spatial import DestVISpatial
scvi.settings.reset_logging_handler()
import logging
sys.path.append("/data/yosef2/users/pierreboyeau/DestVI-reproducibility/simulations")
from utils import (find_location_index_cell_type, get_mean_normal,
metrics_vector)
logger = logging.getLogger("scvi")
def construct_neighboors(adata, n_neighbors=5):
locs = adata.obsm["locations"]
nbrs = NearestNeighbors(n_neighbors=n_neighbors, algorithm="ball_tree").fit(locs)
idx_to_neighs = nbrs.kneighbors(locs)[1][:, 1:]
n_indices_ = torch.tensor(idx_to_neighs)
X = torch.tensor(adata.X.todense())
X_neigh = X[n_indices_]
return X_neigh.numpy(), n_indices_.numpy()
def construct_spatial_partition(adata, n_cv=5):
locs = adata.obsm["locations"]
clust = KMeans(n_clusters=n_cv, n_init=100)
attribs = clust.fit_predict(locs)
return attribs
WORKING_DIR = "/data/yosef2/users/pierreboyeau/scvi-tools/simulations_code"
input_dir = os.path.join(WORKING_DIR, "out/")
output_suffix = "destvi"
sc_epochs = 250
st_epochs = 250
# sc_epochs = 2
# st_epochs = 2
amortization = "latent"
sc_adata = sc.read_h5ad(input_dir + "sc_simu.h5ad")
st_adata = sc.read_h5ad(input_dir + "st_simu.h5ad")
logger.info("Running DestVI")
output_dir = input_dir + output_suffix + "_" + amortization + "/"
if not os.path.isdir(output_dir):
logger.info("Directory doesn't exist, creating it")
os.mkdir(output_dir)
else:
logger.info(f"Found directory at:{output_dir}")
def destvi_get_metrics(spatial_model):
# second get the proportion estimates
proportions = spatial_model.get_proportions().values
agg_prop_estimates = proportions
# third impute at required locations
# for each cell type, query the model at certain locations and compare to groundtruth
# create a global flush for comparaison across cell types
imputed_expression = np.zeros_like(s_groundtruth)
for ct in range(C):
indices, _ = find_location_index_cell_type(
st_adata.obsm["locations"], ct, s_location, s_ct
)
expression = spatial_model.get_scale_for_ct(
spatial_model.cell_type_mapping[ct], indices=indices
).values
normalized_expression = expression / np.sum(expression, axis=1)[:, np.newaxis]
# flush to global
indices_gt = np.where(s_ct == ct)[0]
imputed_expression[indices_gt] = normalized_expression
all_res = []
all_res_long = []
for ct in range(C):
# get local scores
indices_gt = np.where(s_ct == ct)[0]
# potentially filter genes for local scores only
gene_list = np.unique(
np.hstack([np.where(components_[ct, i] != 0)[0] for i in range(D)])
)
res = metrics_vector(
s_groundtruth[indices_gt],
imputed_expression[indices_gt],
scaling=2e5,
feature_shortlist=gene_list,
)
res_long = metrics_vector(
s_groundtruth[indices_gt], imputed_expression[indices_gt], scaling=2e5
)
all_res.append(pd.Series(res))
all_res_long.append(pd.Series(res_long))
all_res.append(
pd.Series(metrics_vector(s_groundtruth, imputed_expression, scaling=2e5))
)
all_res = all_res + all_res_long
df = pd.concat(all_res, axis=1)
prop_score = metrics_vector(st_adata.obsm["cell_type"], agg_prop_estimates)
df = pd.concat([df, pd.Series(prop_score)], axis=1)
df.columns = (
["ct" + str(i) for i in range(5)]
+ ["ct_long" + str(i) for i in range(5)]
+ ["allct", "proportions"]
)
return df.T.reset_index().rename(columns=dict(index="where_ct"))
###Output
_____no_output_____
###Markdown
Training single-cell model & gathering spot neighborhood information
###Code
# setup ann data
scvi.data.setup_anndata(sc_adata, labels_key="cell_type")
mapping = sc_adata.uns["_scvi"]["categorical_mappings"]["_scvi_labels"]["mapping"]
# train sc-model
sc_model = CondSCVI(sc_adata, n_latent=4, n_layers=2, n_hidden=128)
sc_model.train(
max_epochs=sc_epochs,
plan_kwargs={"n_epochs_kl_warmup": 2},
progress_bar_refresh_rate=1,
)
plt.plot(sc_model.history["elbo_train"], label="train")
plt.title("ELBO on train set over training epochs")
plt.legend()
plt.savefig(output_dir + "sc_model_training.png")
plt.clf()
_sc_model = copy.deepcopy(sc_model)
x_n, ind_n = construct_neighboors(st_adata, n_neighbors=5)
attribs = construct_spatial_partition(st_adata)
st_adata.obsm["x_n"] = x_n
st_adata.obsm["ind_n"] = ind_n
scvi.data.setup_anndata(st_adata)
amortization = "latent"
loc_df = pd.DataFrame(st_adata.obsm["locations"])
loc_df.columns = ["x", "y"]
loc_df = (
loc_df
.reset_index()
.rename(columns={"index": "spot"})
)
gt_props = pd.DataFrame(st_adata.obsm["cell_type"])
gt_props.columns = ["ct0", "ct1", "ct2", "ct3", "ct4"]
gt_props = (
gt_props
.stack()
.to_frame("proportion_gt")
.reset_index()
.rename(columns={"level_0": "spot", "level_1": "celltype"})
)
rdm_indices = [1, 25, 100, 1000]
plt.scatter(st_adata.obsm["locations"][rdm_indices, 0], st_adata.obsm["locations"][rdm_indices, 1])
n_locs = st_adata.obsm["locations"][st_adata.obsm["ind_n"][rdm_indices]].reshape(-1, 2)
plt.scatter(n_locs[:, 0], n_locs[:, 1])
###Output
_____no_output_____
###Markdown
Step 0: get ground-truth
###Code
param_path = "/data/yosef2/users/pierreboyeau/data/spatial_data/"
PCA_path = param_path + "grtruth_PCA.npz"
grtruth_PCA = np.load(PCA_path)
mean_, components_ = grtruth_PCA["mean_"], grtruth_PCA["components_"]
C = components_.shape[0]
D = components_.shape[1]
threshold_gt = 0.4
spot_selection = np.where(st_adata.obsm["cell_type"].max(1) > threshold_gt)[0]
s_location = st_adata.obsm["locations"][spot_selection]
s_ct = st_adata.obsm["cell_type"][spot_selection, :].argmax(1)
s_gamma = st_adata.obsm["gamma"][spot_selection]
s_groundtruth = get_mean_normal(s_ct[:, None], s_gamma[:, None], mean_, components_)[:, 0, :]
s_groundtruth[s_groundtruth < 0] = 0
s_groundtruth = np.expm1(s_groundtruth)
s_groundtruth = s_groundtruth / np.sum(s_groundtruth, axis=1)[:, np.newaxis]
###Output
_____no_output_____
###Markdown
Step 1: estimate GT $\lambda$ Grid search
###Code
lamb_scales = np.geomspace(1e3, 1e9, 50)
WORKING_DIRB = os.path.join(WORKING_DIR, "mdl_ckpt2")
gridsearch_res = pd.DataFrame()
gridsearch_metrics_res = pd.DataFrame()
for lamb in lamb_scales:
for seed in range(3):
# for seed in range(1):
for n_neig in [3, 5]:
for mode in ["pair"]:
save_feat = [lamb, n_neig, seed]
save_feat = [str(sv) for sv in save_feat]
savename = "_".join(save_feat) + "nn_step1_25pts.pt"
mdl_path = os.path.join(WORKING_DIRB, savename)
x_n, ind_n = construct_neighboors(st_adata, n_neighbors=n_neig)
attribs = construct_spatial_partition(st_adata)
st_adata.obsm["x_n"] = x_n
st_adata.obsm["ind_n"] = ind_n
scvi.data.setup_anndata(st_adata)
print(x_n.shape)
if os.path.exists(mdl_path):
print("Model exists ...")
spatial_model_prior = DestVISpatial.load(mdl_path, st_adata)
else:
print("Model does not exists ...")
spatial_model_prior = DestVISpatial.from_rna_model(
st_adata,
sc_model,
vamp_prior_p=100,
amortization=amortization,
spatial_prior=True,
spatial_agg=mode,
lamb=lamb,
)
spatial_model_prior.train(
max_epochs=2000,
# max_epochs=2,
train_size=1,
lr=1e-2,
n_epochs_kl_warmup=400,
progress_bar_refresh_rate=0,
)
spatial_model_prior.save(mdl_path)
df = destvi_get_metrics(spatial_model_prior)
props_all_metrics = (
df
.assign(
Model=mode,
lamb=lamb,
n_neig=n_neig,
seed=seed,
)
)
gridsearch_metrics_res = (
gridsearch_metrics_res.append(props_all_metrics, ignore_index=True)
)
spatial_model_gt = DestVISpatial.from_rna_model(
st_adata,
_sc_model,
vamp_prior_p=100,
amortization=amortization,
spatial_prior=False,
)
spatial_model_gt.train(
max_epochs=1000,
# max_epochs=2,
train_size=1,
lr=1e-1,
n_epochs_kl_warmup=100,
progress_bar_refresh_rate=1,
)
df = destvi_get_metrics(spatial_model_gt)
props_all_metrics = (
df
.assign(
Model=mode,
lamb=-np.infty,
n_neig=n_neig,
)
)
gridsearch_metrics_res = gridsearch_metrics_res.append(props_all_metrics, ignore_index=True)
validation_gt_df = gridsearch_metrics_res.sort_values("lamb").copy()
###Output
_____no_output_____
###Markdown
Step 2: Gene CV Prelim: Try to find gene modules
###Code
expression = sc_model.get_normalized_expression()
cts = sc_model.adata.obs["cell_type"]
scvi_model = SCVI(sc_adata, n_latent=2, n_layers=2, n_hidden=128)
scvi_model.train(
max_epochs=sc_epochs,
plan_kwargs={"n_epochs_kl_warmup": 2},
progress_bar_refresh_rate=1,
)
latent = scvi_model.get_latent_representation()
latent_ = pd.DataFrame(latent)
latent_.index = ["cell" + str(col) for col in latent_.index]
expression_ = expression.T
expression_.columns = ["cell"+ str(col) for col in expression_.columns]
import hotspot
hs = hotspot.Hotspot(expression_, model='none', latent=latent_)
hs.create_knn_graph(weighted_graph=False, n_neighbors=30)
hs_results = hs.compute_autocorrelations()
# hs_genes = hs_results.loc[hs_results.FDR < 0.05].index # Select genes
hs_genes = hs_results.index
local_correlations = hs.compute_local_correlations(hs_genes, jobs=20) # jobs for parallelization
modules = hs.create_modules(
min_gene_threshold=30, core_only=False, fdr_threshold=0.05
)
gene_train_indices = (
modules
.groupby(modules)
.apply(lambda x: x.sample(frac=0.5).index.to_series().astype(int))
.to_frame("indices")
.reset_index()
.indices
.values
)
nfolds = 2
ngenes = st_adata.X.shape[-1]
heldout_folds = np.arange(nfolds)
gene_folds = np.isin(np.arange(ngenes), gene_train_indices)
for heldout in heldout_folds[:-1]:
training_mask = gene_folds != heldout
training_mask = torch.tensor(training_mask)
test_mask = ~training_mask
print(training_mask.sum(), test_mask.sum())
###Output
_____no_output_____
###Markdown
Grid search
###Code
st_epochs = 500
cv_results_metrics = pd.DataFrame()
for heldout in heldout_folds[:-1]:
training_mask = gene_folds != heldout
training_mask = torch.tensor(training_mask)
test_mask = ~training_mask
for lamb in lamb_scales:
save_feat = [lamb, n_neig, heldout]
save_feat = [str(sv) for sv in save_feat]
savename = "_".join(save_feat) + "_step2_25pts_500___1epcs_stratified.pt"
mdl_path = os.path.join(WORKING_DIRB, savename)
if os.path.exists(mdl_path):
spatial_model = DestVISpatial.load(mdl_path, st_adata)
spatial_model.construct_loaders()
else:
spatial_model = DestVISpatial.from_rna_model(
st_adata,
sc_model,
vamp_prior_p=100,
amortization=amortization,
spatial_prior=True,
spatial_agg="pair",
lamb=lamb,
training_mask=training_mask,
)
# Step 1: training genes
spatial_model.train(
max_epochs=1000,
# max_epochs=2,
train_size=1,
lr=1e-2,
n_epochs_kl_warmup=400,
plan_kwargs=dict(
loss_mask=training_mask,
),
progress_bar_refresh_rate=1,
)
# Step 2: heldout genes
myparameters = [spatial_model.module.eta] + [spatial_model.module.beta]
myparameters = filter(lambda p: p.requires_grad, myparameters)
spatial_model.train(
max_epochs=st_epochs,
# max_epochs=2,
train_size=1,
progress_bar_refresh_rate=1,
n_epochs_kl_warmup=400,
lr=1e-2,
plan_kwargs=dict(
loss_mask=test_mask,
myparameters=myparameters,
),
)
spatial_model.save(mdl_path)
rec_loss, rec_loss_all = spatial_model.get_metric()
gene_infos = pd.DataFrame(
{
"gene": ["full"] + list(np.arange(len(rec_loss_all))),
"reconstruction": [rec_loss] + list(rec_loss_all)
}
).assign(
heldout=heldout,
lamb=lamb,
train_phase=False
)
cv_results_metrics = cv_results_metrics.append(gene_infos, ignore_index=True)
###Output
_____no_output_____
###Markdown
Step 3: Verifying properties
###Code
plot_df_val = (
validation_gt_df
.assign(lambdd=lambda x: np.log10(x.lamb))
.assign(lambdd=lambda x: x.lambdd.fillna(0.))
# .loc[lambda x: x.Model == "pair"]
.loc[lambda x: x.where_ct == "proportions"]
.loc[lambda x: x.n_neig == 5]
.groupby(["lambdd", "n_neig", "Model", "where_ct"])
["avg_spearman", "avg_pearson", "median_l1", "mse"]
.mean()
.reset_index()
)
plot_df_cv = (
cv_results_metrics
.loc[lambda x: x.gene != "full"]
.assign(lambdd=lambda x: np.log10(x.lamb))
.groupby("lambdd")
.reconstruction
# .median()
.max()
.reset_index()
)
met_name = "avg_spearman"
orac_name = "reconstruction"
ref_met = plot_df_val.iloc[0][met_name]
plt.rcParams['ps.useafm'] = True
plt.rcParams['pdf.use14corefonts'] = True
plt.rcParams['svg.fonttype'] = 'none'
plt.rcParams['text.usetex'] = False
fig, axes = plt.subplots(nrows=1, sharex=True, figsize=(4, 3))
plt.plot(
plot_df_val.iloc[1:].lambdd,
plot_df_val.iloc[1:][met_name],
c="blue"
)
lambdstar = plot_df_val.iloc[1:][met_name]
xmin, xmax = plt.xlim()
orac = plot_df_val.iloc[1:].set_index("lambdd")[met_name]
lambdstar = orac.idxmax()
corrmax = orac.max()
plt.hlines(ref_met, xmin, xmax, color="black")
plt.ylabel("Mean proportions correlation", c="blue")
ymin, ymax = plt.ylim()
plt.locator_params(axis='y', nbins=5)
plt.vlines(lambdstar, ymin, corrmax, color="black", linestyle="--")
plt.twinx()
plt.plot(plot_df_cv.lambdd, plot_df_cv[orac_name], c="red")
lambda_best = plot_df_cv.set_index("lambdd").reconstruction.idxmin()
reco_best = plot_df_cv.set_index("lambdd").reconstruction.min()
ymin, ymax = plt.ylim()
plt.vlines(lambda_best, ymin, reco_best, color="black", linestyle="--")
# plt.ylim(ymin, 25000)
# plt.ylim(ymin, 1800)
plt.ylabel("Heldout reconstruction error", c="red")
plt.xlabel("Prior strenght")
plt.locator_params(axis='y', nbins=6)
ticks = [4, 8, lambda_best, lambdstar]
labels = [
4,
8,
"$\hat \lambda \approx$ {:.2f}".format(lambda_best),
"$\lambda\^ \star \approx$ {:.2f}".format(lambdstar)
]
plt.xticks(ticks, labels)
# plt.xticks(list(plt.xticks()[0]) + [lambda_best])
plt.tight_layout()
# plt.savefig("spatialprior.svg")
###Output
_____no_output_____ |
notebooks/0.0.0_tensorflow_gpu.ipynb | ###Markdown
Are there any GPUs?
###Code
import time
import numpy as np
import tensorflow as tf
from tensorflow import keras  # or `import keras` if the standalone Keras package is installed
from tensorflow.python.client import device_lib

tf.__version__
keras.__version__
def get_available_devices():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos]
print(get_available_devices())
###Output
['/device:CPU:0', '/device:GPU:0']
###Markdown
Check CPU vs GPU performance
###Code
def count_seconds(matrix, num_runs=5, device='gpu'):
runtimes = []
for i in range(num_runs):
start = time.perf_counter()
with tf.device(f'/{device}:0'):
a = tf.constant(matrix, shape=[*matrix.shape], dtype=tf.float32, name='a')
b = tf.constant(matrix, shape=[*matrix.shape], dtype=tf.float32, name='b')
c = tf.matmul(a, b)
elapsed = time.perf_counter() - start
runtimes.append(elapsed)
print(f'Elapsed {np.mean(runtimes):.3f} +/- {np.std(runtimes)/np.sqrt(num_runs):.3f} seconds on {device}.')
return np.mean(runtimes)
cpu_runtimes = []
gpu_runtimes = []
Ns = [10, 50, 100, 250, 500, 1000, 2500, 5000, 7500, 10000]
for N in Ns:
matrix = np.random.randn(N, N).astype(np.float32)
cpu_runtimes.append(count_seconds(matrix, num_runs=3, device='cpu'))
gpu_runtimes.append(count_seconds(matrix, num_runs=3, device='gpu'))
import matplotlib.pyplot as plt
plt.loglog(Ns, cpu_runtimes, '-o', label='CPU')
plt.loglog(Ns, gpu_runtimes, '-o', label='GPU')
plt.legend()
plt.ylabel('Execution time (s)')
plt.xlabel('Matrix size')
plt.show()
###Output
_____no_output_____ |
Soluciones/2. Numpy/2. Soluciones.ipynb | ###Markdown
NumPy - Exercises (II)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
1. Write a function that receives a number n, generates an nxn matrix from a numeric sequence and writes it to a file whose name is also received as a parameter. Do it with binary files.
###Code
def matriz(n, nombre):
array = np.arange(n*n).reshape(n,n)
np.save(nombre, array)
return array
matriz(5, 'Afi')
###Output
_____no_output_____
###Markdown
2. Write a function that receives the name of a NumPy binary file, reads it as binary, sums its contents by rows and saves the result in a text file with the same name (but with a .txt extension).
###Code
def ejercicio(nombre_fichero):
array1=np.load(nombre_fichero) #nombre_fichero + '.npy'
array_suma=np.sum(array1, axis=1)
np.savetxt(nombre_fichero[:-4]+'.txt', array_suma) #nombre_fichero + '.txt'
ejercicio('Afi.npy')
###Output
_____no_output_____
###Markdown
3. Write a function that receives two integers n and m, generates two random matrices (standard normal 0,1) of dimensions nxm and mxn, and takes their dot product to obtain an nxn matrix. It should return a tuple with the three matrices.
###Code
def ejercicio3 (n,m):
array1= np.random.randn(n*m).reshape(n,m) #np.random.randn(n,m)
array2=np.random.randn(n*m).reshape(m,n)
array3=np.dot(array1,array2)
return(array1,array2,array3)
ejercicio3(4,5)
###Output
_____no_output_____
###Markdown
4. Write a function that receives a data matrix and an integer n and returns a sample (simple random sampling) of n elements.
###Code
def fun_ex4 (array,a):
n,m=array.shape
list_ex4 = []
i=0
while i<a:
elemento = array[np.random.randint(0,n), np.random.randint(0,m)]
list_ex4.append(elemento)
i+=1
return list_ex4
array1=np.arange(9).reshape(3,3)
fun_ex4 (array1,5)
def muestreo(matriz,n):
a,b = matriz.shape
muestreo_filas = np.random.randint(0,a,n)
print(muestreo_filas)
muestreo_columnas = np.random.randint(0,b,n)
print(muestreo_columnas)
muestra = matriz[muestreo_filas,muestreo_columnas]
return(muestra)
mat = np.random.randn(4,5)
print(mat)
muestreo(mat,50)
def ejercicio_4(matriz,numero):
muestra = []
elementos = matriz.reshape(matriz.size)
while len(muestra) < numero:
i = np.random.randint(0,matriz.size)
muestra.append(elementos[i])
return (muestra)
ejercicio_4(mat,10)
def ejercicio_4(matriz,numero):
print(matriz)
elementos = matriz.reshape(matriz.size)
print(elementos)
m = np.random.randint(0,matriz.size,numero)
print(m)
muestra = elementos[m]
return muestra
ejercicio_4(mat,10)
###Output
[[-0.01936582 0.48906414 -1.24214273 -0.78001258 1.77402278]
[-0.93670006 -0.91937625 0.74154219 -0.69050341 0.99347612]
[-0.86079045 -0.4830283 1.24496624 -0.25878635 -0.43274072]
[ 0.56144471 0.2968298 -0.19883494 0.40690294 0.03912759]]
[-0.01936582 0.48906414 -1.24214273 -0.78001258 1.77402278 -0.93670006
-0.91937625 0.74154219 -0.69050341 0.99347612 -0.86079045 -0.4830283
1.24496624 -0.25878635 -0.43274072 0.56144471 0.2968298 -0.19883494
0.40690294 0.03912759]
[13 3 11 8 16 11 5 0 0 13]
|
week04/02_list_part_2.ipynb | ###Markdown
List Refresher to List
A list is a non-primitive data type, i.e. a data structure. A collection that can be accessed with zero-based indexing is called a list.
```python
my_list = ["element", "of", "a", "list"]  # my_list is a list containing 4 strings
print(my_list[1])    # this will print "of"
print(my_list[1:3])  # this will print the list ["of", "a"]
```
Lists are written inside square brackets and are accessed with square brackets. To slice "from a start index up to an end index", use the ':' (colon).
A list can hold not only strings but a mix of primitive and non-primitive types.
```python
my_list_2 = ["string", 42, 3.14, True]  # this is ok
```
###Code
# Try it out
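# A sample run of the examples above (added; this cell was originally left blank to try things out):
my_list = ["element", "of", "a", "list"]
print(my_list[1])    # "of"
print(my_list[1:3])  # ["of", "a"]
my_list_2 = ["string", 42, 3.14, True]
print(my_list_2)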
###Output
_____no_output_____
###Markdown
The Special **Lazy** Function `range`
`range` is a **lazy** function that produces a list of numbers.
```python
print(list(range(0, 10)))  # this will print a list [0, 1, 2, ..., 9]
```
What do you mean by "**Lazy**" ???
```python
print(range(0, 9))  # this will print an object (not the resulting list)
```
```python
print(list(range(0, 9)))  # only when the object is accessed (to convert to a list) does it evaluate and give you the list
```
###Code
# Try it out
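# A sample run of the examples above (added; this cell was originally left blank to try things out):
print(range(0, 9))          # a lazy range object, not the list itself
print(list(range(0, 10)))   # evaluated only when converted: [0, 1, ..., 9]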
###Output
_____no_output_____
###Markdown
A `list` is an object as well
Like `str`, a `list` is also an object.> So it has methods (functions).
```python
append  extend
pop     insert
index   sort  reverse
```
List is not a primitive type
```python
my_list = [0, 1, 2, 3, 4]
another_pointer_to_my_list = my_list
four = another_pointer_to_my_list.pop()
print(my_list)  # what just happened ???
```
###Code
my_list = [0, 1, 2, 3, 4]
another_pointer_to_my_list = my_list
four = another_pointer_to_my_list.pop()
print (my_list) # what just happened ???
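# Added illustration (not part of the original lesson): both names above point to the
# same list object, so pop() through either name changes what both see.
# To get an independent copy, copy the list first:
fresh_list = [0, 1, 2, 3, 4]
independent_copy = list(fresh_list)   # or fresh_list.copy()
independent_copy.pop()
print(fresh_list)          # unchanged: [0, 1, 2, 3, 4]
print(independent_copy)    # [0, 1, 2, 3]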
###Output
_____no_output_____
###Markdown
List Comprehension
In high-school mathematics we studied several kinds of set notation. One of them is written like $\mathbb{I} = \{i : i \text{ is an integer}\}$, $\mathbb{Z} = \{i : i \text{ is an integer and } i \geq 0 \}$. A list comprehension follows exactly the same pattern.
```python
my_fruits = ["apple", "orange", "lemon", "lime", "banana", "watermelon", "papaya"]
my_non_citrus_fruits = [x for x in my_fruits if x != "lemon" and x != "lime" and x != "orange"]
```
###Code
# Try it out
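# A sample run of the example above (added; this cell was originally left blank to try things out):
my_fruits = ["apple", "orange", "lemon", "lime", "banana", "watermelon", "papaya"]
my_non_citrus_fruits = [x for x in my_fruits if x != "lemon" and x != "lime" and x != "orange"]
print(my_non_citrus_fruits)   # ['apple', 'banana', 'watermelon', 'papaya']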
###Output
_____no_output_____
###Markdown
Lists can contain other lists too. For example:
```python
my_list_of_list = [
    [0, 1, 2],
    [3, 4, 5],
    [6, 7, 8]
]
print(my_list_of_list[0][2])  # this will print 2
print(my_list_of_list[2][1])  # this will print 7
```
Unlike some other programming languages, Python has no built-in high-dimensional array.
```python
print(my_list_of_list[0, 2])  # this will give you an error
```
> That is why the `numpy` library provides array implementations to fill this gap.> We will talk about `numpy` in the coming weeks.
###Code
my_list_of_list = [
[0, 1, 2],
[3, 4, 5],
[6, 7, 8]
]
print(my_list_of_list[0][2])
###Output
_____no_output_____
###Markdown
Special Function `len`
`len` is a built-in function that works on `str`, `list` and every other collection.
```python
my_str = "my string"
my_lst = ["my", "string"]
print(len(my_str))
print(len(my_lst))
```
###Code
# Try it out
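# A sample run of the example above (added; this cell was originally left blank to try things out):
my_str = "my string"
my_lst = ["my", "string"]
print(len(my_str))   # 9
print(len(my_lst))   # 2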
###Output
_____no_output_____
###Markdown
The inclusion check `in`
```python
my_list = [1, 2, "string", "blar", True]
print(2 in my_list)
print("string" in my_list)
print(False in my_list)
```
`str` function `split`
The `split` function breaks a `str` apart into a `list`.
```python
my_string = "this is a string with six spaces"
my_list = my_string.split(" ")
print(my_list)
```
```python
another_string = "this, string, is , comma delimited list of items, apple, banana"
my_another_list = another_string.split(",")
print(my_another_list)
my_yet_another_list = [s.strip() for s in my_another_list]
print(my_yet_another_list)
```
###Code
# Try it out
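# A sample run of the examples above (added; this cell was originally left blank to try things out):
my_list = [1, 2, "string", "blar", True]
print(2 in my_list)              # True
my_string = "this is a string with six spaces"
print(my_string.split(" "))
another_string = "this, string, is , comma delimited list of items, apple, banana"
print([s.strip() for s in another_string.split(",")])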
###Output
_____no_output_____
###Markdown
Special Function `zip`
You can think of `zip` as drawing the letter 'Z' down across the lists.
```python
item_name = ["chicken", "egg", "fish"]
price_per_kg = [8.00, 3.00, 12.00]
item_code = ["d001", "d002", "s001"]
# to print these row by row, you can write:
for name, price, code in zip(item_name, price_per_kg, item_code):
    print("Name : {} --> Price : {} --> Code : {}".format(name, price, code))
# which is the same as writing:
for i in range(len(item_name)):
    print("Name : {} --> Price : {} --> Code : {}".format(
        item_name[i], price_per_kg[i], item_code[i]
    ))
```
> We will talk about loops in detail in week 04.
###Code
# Try it out
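# A sample run of the example above (added; this cell was originally left blank to try things out):
item_name = ["chicken", "egg", "fish"]
price_per_kg = [8.00, 3.00, 12.00]
item_code = ["d001", "d002", "s001"]
for name, price, code in zip(item_name, price_per_kg, item_code):
    print("Name : {} --> Price : {} --> Code : {}".format(name, price, code))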
###Output
_____no_output_____ |
day 02 PyTORCH and PyCUDA/PyTorch/02 PyTorch basic Tensor operations.ipynb | ###Markdown
02 PyTorch basic Tensor operations
###Code
% reset -f
from __future__ import print_function
from __future__ import division
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import torch
import sys
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
# call(["nvcc", "--version"]) does not work
! nvcc --version
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"])
print('Active CUDA Device: GPU', torch.cuda.current_device())
print ('Available devices ', torch.cuda.device_count())
print ('Current cuda device ', torch.cuda.current_device())
###Output
__Python VERSION: 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609]
__pyTorch VERSION: 0.1.12+4eb448a
__CUDA VERSION
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
__CUDNN VERSION: 5110
__Number CUDA Devices: 1
__Devices
Active CUDA Device: GPU 0
Available devices 1
Current cuda device 0
###Markdown
Numpy vs PyTorch Syntax
| NumPy | PyTorch |
| --- | --- |
| `np.zeros((2, 3))` | `torch.zeros(2, 3)` |
| `np.random.rand(2, 3)` | `torch.rand(2, 3)` |
| `x.reshape(1, -1)` | `x.view(1, -1)` |
| `x.shape` | `x.size()` |
| `x.dot(w)` | `x.mm(w)` |
| `x.matmul(w)` | `x.bmm(w)` |
| `x.T` | `x.t()` |
| `x.transpose(0, 2, 1)` | `x.permute(0, 2, 1)` |
| `x.argmax(axis=1)` | `_, i = x.max(dim=1)` |
| `np.sum(x, axis=1)` | `torch.sum(x, dim=1)` |
| `np.maximum(x, 0)` | `torch.clamp(x, min=0)` |
| `x.copy()` | `x.clone()` |
Torch Tensors
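The short snippet below is only an illustrative sanity check of a few rows from the comparison table above; it assumes a reasonably recent NumPy/PyTorch pair is importable (keyword arguments may differ slightly on the very old torch 0.1.x build used elsewhere in this notebook).
```python
import numpy as np
import torch

x_np = np.arange(6).reshape(2, 3)         # NumPy reshape / shape / sum
x_t = torch.from_numpy(x_np).view(2, 3)   # PyTorch view / size / sum
print(x_np.shape, x_t.size())
print(np.sum(x_np, axis=1))               # [ 3 12]
print(torch.sum(x_t, dim=1))              # tensor([ 3, 12])
print(np.maximum(x_np - 2, 0))            # elementwise max with 0
print(torch.clamp(x_t - 2, min=0))        # the PyTorch analogue
```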
###Code
from __future__ import print_function
import torch
from torch.autograd import Variable
x=torch.Tensor(3,2)
print (type(x))
print (x)
# how variables work
x = Variable(x)
print ("x:" + str (x))
print ("requires grad:" + str(x.requires_grad))
print ("data:" + str(x.data))
x=torch.rand(3,4)
print (type(x))
print (x)
print (x[1:])
x.numpy()
if torch.cuda.is_available():
x = x.cuda()*2
print (type(x))
print (x)
###Output
<class 'torch.cuda.FloatTensor'>
0.9734 0.8315 0.8623 0.2715
0.9024 0.9625 0.1600 0.3012
0.3816 0.8592 1.2577 0.3798
[torch.cuda.FloatTensor of size 3x4 (GPU 0)]
|
Learn/Python/Preliminary/Introduction to Data Science with Python/Introduction to Data Science with Python.ipynb | ###Markdown
[Python “Hello World” & Statement](https://academy.dqlab.id/main/livecode/162/306/1398)
###Code
print("Hello World.")
print("Saya Aksara, baru belajar Python.")
###Output
Hello World.
Saya Aksara, baru belajar Python.
###Markdown
[Variables in Python](https://academy.dqlab.id/main/livecode/162/306/1399)
###Code
bil1 = 10
Bil_2 = 20
Frasa = "Halo Dunia"
bil1, Bil_2 = 10, 20
salam = "Selamat Pagi"; Penutup = "Salam Sejahtera"
###Output
_____no_output_____
###Markdown
[Comments in Python](https://academy.dqlab.id/main/livecode/162/306/1401)
###Code
#the statement on this line does not affect the program
'''
these statements will not be executed by Python
and this statement will not be executed either
nor will this one
'''
print("jadi # digunakan untuk membuat comment pada Python")
###Output
jadi # digunakan untuk membuat comment pada Python
###Markdown
[Practice Task](https://academy.dqlab.id/main/livecode/162/307/1406)
###Code
text = "Belajar Python di DQLab."
print(list(text))
print(tuple(text))
print(set(text))
###Output
['B', 'e', 'l', 'a', 'j', 'a', 'r', ' ', 'P', 'y', 't', 'h', 'o', 'n', ' ', 'd', 'i', ' ', 'D', 'Q', 'L', 'a', 'b', '.']
('B', 'e', 'l', 'a', 'j', 'a', 'r', ' ', 'P', 'y', 't', 'h', 'o', 'n', ' ', 'd', 'i', ' ', 'D', 'Q', 'L', 'a', 'b', '.')
{'h', 'a', 'l', 'e', 'r', 'P', 'y', 'j', 'n', 'L', 't', 'd', '.', 'o', 'Q', 'B', ' ', 'D', 'i', 'b'}
###Markdown
[Using Libraries in Python](https://academy.dqlab.id/main/livecode/162/307/1408)
###Code
import math
import numpy as np
import pandas as pd
import seaborn as sns
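# Added illustration (not part of the original DQLab material): once imported,
# each library's functions are available through its name or alias.
print(math.sqrt(16))                          # 4.0
print(np.array([1, 2, 3]).mean())             # 2.0
print(pd.DataFrame({"x": [1, 2, 3]}).shape)   # (3, 1)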
###Output
_____no_output_____ |
examples/Neural_network_control_flow_power_iteration.ipynb | ###Markdown
In CoreML Neural Network Specification version 4 (which is available from iOS 13 and MacOS 10.15), several "control-flow" layers have been added. CoreML spec is described in the protobuf format and for a list of all supported layer types and documentation, see [here](https://github.com/apple/coremltools/blob/master/mlmodel/format/NeuralNetwork.proto).In this notebook, we build a neural network that uses a few of the new control flow layers. We will write a simple python program to compute the largest eigenvalue of a given matrix and then show how a neural network can be built to replicate that program in an mlmodel.We choose the [power iteration method](https://en.wikipedia.org/wiki/Power_iteration). It is a simple iterative algorithm. Given a square matrix, $A$ of dimensions $n\times n$, it computes the largest eigenvalue (by magnitude) and the corresponding eigenvector (the algorithm can be adapted to compute all the eigenvalues, however we do not implement that here). Here is how the algorithm works. Pick a normalized random vector to start with, $x$, of dimension $n$. Repetitively, multiply it by the matrix and normalize it, i.e., $x\leftarrow Ax$ and $x\leftarrow \frac{x}{\left \| x \right \|}$. Gradually the vector converges to the largest eigenvector. Simple as that! There are a few conditions that the matrix should satisfy for this to happen, but let us not worry about it for this example. For now we will assume that the matrix is real and symmetric, this guarantees the eigenvalues to be real. After we have the normalized eigenvector, the corresponding eigenvalue can be computed by the formula $x^TAx$ Let's code this up in Python using Numpy!
###Code
import numpy as np
import copy
np.random.seed(8) # try different seeds to play with the number of iterations it takes for convergence!
'''
Use power method to compute the largest eigenvalue of a real symmetric matrix
'''
convergence_tolerance = 1e-6 # decrease/increase to trade off precision
number_of_iterations = 100 # decrease/increase to trade off precision
def power_iteration(matrix, starting_vector):
x = copy.deepcopy(starting_vector)
for i in range(number_of_iterations):
y = np.matmul(A,x)
#normalize
y = y / np.sqrt(np.sum(y**2))
# compute the diff to check for convergence
# we use cosine difference as both vectors are normalized and can get
# rotated by 180 degrees between iterations
diff = 1-abs(np.dot(x,y))
# update x
x = y
print('{}: diff: {}'.format(i, diff))
if diff < convergence_tolerance:
break
x_t = np.transpose(x)
eigen_value = np.matmul(x_t, np.matmul(A,x))
return eigen_value, x
# define the symmetric real matrix for which we need the eigenvalue.
A = np.array([[4,-5], [-5,3]], dtype=np.float)
# a random starting vector
starting_vector = np.random.rand(2)
starting_vector = starting_vector / np.sqrt(np.sum(starting_vector**2)) ## normalize it
eigen_value, eigen_vector = power_iteration(A, starting_vector)
print('Largest eigenvalue: %.4f ' % eigen_value)
print('Corresponding eigenvector: ', eigen_vector)
###Output
0: diff: 6.69187030143e-05
1: diff: 0.00208718410489
2: diff: 0.0614522880272
3: diff: 0.771617699317
4: diff: 0.193129218664
5: diff: 0.0075077446807
6: diff: 0.000241962094403
7: diff: 7.74407193072e-06
8: diff: 2.47796068775e-07
Largest eigenvalue: 8.5249
('Corresponding eigenvector: ', array([-0.74152421, 0.67092611]))
###Markdown
We see that in this case, the algorithm converged, given our specified toelrance, in 9 iterations. To confirm whether the eigenvalue is correct, lets use the "linalg" sub-package of numpy.
###Code
from numpy import linalg as LA
e, v = LA.eig(A)
idx = np.argmax(abs(e))
print('numpy linalg: largest eigenvalue: %.4f ' % e[idx])
print('numpy linalg: first eigenvector: ', v[:,idx])
###Output
numpy linalg: largest eigenvalue: 8.5249
('numpy linalg: first eigenvector: ', array([ 0.74145253, -0.67100532]))
###Markdown
Indeed we see that the eigenvalue matches with our power iteration code. The eigenvector is rotated by 180 degrees, but that is fine.Now, lets build an mlmodel to do the same. We use the builder API provided by coremltools to write out the protobuf messages.
###Code
import coremltools
import coremltools.models.datatypes as datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder
input_features = [('matrix', datatypes.Array(*(2,2))),
('starting_vector', datatypes.Array(*(2,)))]
output_features = [('maximum_eigen_value', datatypes.Array(*(1,))),
('eigen_vector', None),
('iteration_count', datatypes.Array(*(1,)))]
builder = NeuralNetworkBuilder(input_features, output_features, disable_rank5_shape_mapping=True)
# convert the starting_vector which has shape (2,) to shape (2,1)
# so that it can be used by the Batched-MatMul layer
builder.add_expand_dims('expand_dims', 'starting_vector', 'x', axes=[-1])
builder.add_load_constant_nd('iteration_count', 'iteration_count',
constant_value=np.zeros((1,)),
shape=(1,))
# start building the loop
loop_layer = builder.add_loop('loop', max_iterations=number_of_iterations)
# get the builder object for the "body" of the loop
loop_body_builder = NeuralNetworkBuilder(nn_spec=loop_layer.loop.bodyNetwork)
# matrix multiply
# input shapes: (n,n),(n,1)
# output shape: (n,1)
loop_body_builder.add_batched_mat_mul('bmm.1', input_names=['matrix','x'], output_name='y')
# normalize the vector
loop_body_builder.add_reduce_l2('reduce', input_name='y', output_name='norm', axes = 0)
loop_body_builder.add_divide_broadcastable('divide', ['y','norm'], 'y_normalized')
# find difference with previous, which is computed as (1 - abs(cosine diff))
loop_body_builder.add_batched_mat_mul('cosine', ['y_normalized', 'x'], 'cosine_diff', transpose_a=True)
loop_body_builder.add_unary('abs_cosine','cosine_diff','abs_cosine_diff', mode='abs')
loop_body_builder.add_activation('diff', non_linearity='LINEAR',
input_name='abs_cosine_diff',
output_name='diff', params=[-1,1])
# update iteration count
loop_body_builder.add_activation('iteration_count_add', non_linearity='LINEAR',
input_name='iteration_count',
output_name='iteration_count_plus_1', params=[1,1])
loop_body_builder.add_copy('iteration_count_update', 'iteration_count_plus_1', 'iteration_count')
# update 'x'
loop_body_builder.add_copy('update_x', 'y_normalized', 'x')
# add condition to break from the loop, if convergence criterion is met
loop_body_builder.add_less_than('cond', ['diff'], 'cond', alpha=convergence_tolerance)
branch_layer = loop_body_builder.add_branch('branch_layer', 'cond')
builder_ifbranch = NeuralNetworkBuilder(nn_spec=branch_layer.branch.ifBranch)
builder_ifbranch.add_loop_break('break')
# now we are out of the loop, compute the eigenvalue
builder.add_batched_mat_mul('bmm.2', input_names=['matrix','x'], output_name='x_right')
builder.add_batched_mat_mul('bmm.3', input_names=['x','x_right'], output_name='maximum_eigen_value', transpose_a=True)
builder.add_squeeze('squeeze', 'x', 'eigen_vector', squeeze_all=True)
spec = builder.spec
model = coremltools.models.MLModel(spec)
###Output
_____no_output_____
###Markdown
Okay, so now we have the mlmodel spec. Before we call predict on it, let's print it out to check whether everything looks okay. We use the utility called "print_network_spec".
###Code
from coremltools.models.neural_network.printer import print_network_spec
print_network_spec(spec, style='coding')
# call predict on CoreML model
input_dict = {}
input_dict['starting_vector'] = starting_vector
input_dict['matrix'] = A.astype(np.float)
output = model.predict(input_dict)
coreml_eigen_value = output['maximum_eigen_value']
coreml_eigen_vector = output['eigen_vector']
print('CoreML computed eigenvalue: %.4f' % coreml_eigen_value)
print('CoreML computed eigenvector: ', coreml_eigen_vector, coreml_eigen_vector.shape)
print('CoreML iteration count: %d' % output['iteration_count'])
###Output
CoreML computed eigenvalue: 8.5249
('CoreML computed eigenvector: ', array([-0.74152416, 0.67092603]), (2,))
CoreML iteration count: 9
###Markdown
Indeed the output matches our Python program. Although we do not do it here, the parameters "convergence_tolerance" and "number_of_iterations" can be made network inputs so that their values can be modified at runtime. Currently, the input shapes to the Core ML model are fixed, $(2, 2)$ for the matrix and $(2,)$ for the starting vector. However, we can add shape flexibility so that the same mlmodel can be run on different input sizes. There are two ways to specify shape flexibility, either through "ranges" or via a list of "enumerated" shapes. Here we specify the latter.
###Code
from coremltools.models.neural_network import flexible_shape_utils
# (2,2) has already been provided as the default shape for "matrix"
# during initialization of the builder,
# here we add two more shapes that will be allowed at runtime
flexible_shape_utils.add_multiarray_ndshape_enumeration(spec,
feature_name='matrix',
enumerated_shapes=[(3,3), (4,4)])
# (2,) has already been provided as the default shape for "matrix"
# during initialization of the builder,
# here we add two more shapes that will be allowed at runtime
flexible_shape_utils.add_multiarray_ndshape_enumeration(spec,
feature_name='starting_vector',
enumerated_shapes=[(3,), (4,)])
model = coremltools.models.MLModel(spec)
# lets run the model with a (3,3) matrix
A = np.array([[1, -6, 8], [-6, 1, 5], [8, 5, 1]], dtype=np.float)
starting_vector = np.random.rand(3)
starting_vector = starting_vector / np.sqrt(np.sum(starting_vector**2)) ## normalize it
eigen_value, eigen_vector = power_iteration(A, starting_vector)
print('python code: largest eigenvalue: %.4f ' % eigen_value)
print('python code: corresponding eigenvector: ', eigen_vector)
from numpy import linalg as LA
e, v = LA.eig(A)
idx = np.argmax(abs(e))
print('numpy linalg: largest eigenvalue: %.4f ' % e[idx])
print('numpy linalg: first eigenvector: ', v[:,idx])
input_dict['starting_vector'] = starting_vector
input_dict['matrix'] = A.astype(np.float)
output = model.predict(input_dict)
coreml_eigen_value = output['maximum_eigen_value']
coreml_eigen_vector = output['eigen_vector']
print('CoreML computed eigenvalue: %.4f' % coreml_eigen_value)
print('CoreML computed eigenvector: ', coreml_eigen_vector, coreml_eigen_vector.shape)
print('CoreML iteration count: %d' % output['iteration_count'])
###Output
CoreML computed eigenvalue: -11.7530
('CoreML computed eigenvector: ', array([ 0.61622757, 0.52125645, -0.59038568]), (3,))
CoreML iteration count: 30
|
Course4 - Convolutional Neural Networks/__empty_notebook/Autonomous+driving+application+-+Car+detection+-+v1.ipynb | ###Markdown
Autonomous driving - Car detectionWelcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242). **You will learn to**:- Use object detection on a car detection dataset- Deal with bounding boxesRun the following cell to load the packages and dependencies that are going to be useful for your journey!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`. 1 - Problem StatementYou are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like. **Figure 1** : **Definition of a box** If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step. In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use. 2 - YOLO YOLO ("you only look once") is a popular algoritm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model detailsFirst things to know:- The **input** is a batch of images of shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).Lets look in greater detail at what this encoding represents. **Figure 2** : **Encoding architecture for YOLO** If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425). **Figure 3** : **Flattening the last two last dimensions** Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class. **Figure 4** : **Find the class detected by each box** Here's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes). 
- Color that grid cell according to what object that grid cell considers the most likely. Doing this results in this picture: **Figure 5** : Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: **Figure 6** : Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps: - Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class) - Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a threshold on class scores You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells. - `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell. - `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell. **Exercise**: Implement `yolo_filter_boxes()`. 1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b  # shape of c will be (19*19, 5, 80)
```
2. For each box, find: - the index of the class with the maximum box score ([Hint](https://keras.io/backend/argmax)) (Be careful with what axis you choose; consider using axis=-1) - the corresponding box score ([Hint](https://keras.io/backend/max)) (Be careful with what axis you choose; consider using axis=-1) 3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep. 4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)) Reminder: to call a Keras function, you should use `K.function(...)`.
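If the `axis=-1` reductions or the boolean mask feel unfamiliar, the short ungraded cell below walks through them on random NumPy data of the same shapes; the variable names are only illustrative, and the Keras/TensorFlow equivalents are noted in the comments.
###Code
# Ungraded illustration: axis=-1 reductions and boolean masking on toy NumPy data of the shapes above.
import numpy as np
np.random.seed(1)
toy_box_scores = np.random.randn(19*19, 5, 80)        # pretend these are the box scores from Step 1
toy_box_classes = np.argmax(toy_box_scores, axis=-1)  # shape (361, 5): index of the best class for each box
toy_class_scores = np.max(toy_box_scores, axis=-1)    # shape (361, 5): score of that best class
print(toy_box_classes.shape, toy_class_scores.shape)
toy_mask = toy_class_scores >= 0.6                    # boolean mask, True for the boxes you would keep (Step 3)
print(toy_class_scores[toy_mask].shape)               # only the kept scores remain, flattened to 1-D (Step 4)
# On tensors you would use K.argmax(x, axis=-1), K.max(x, axis=-1) and tf.boolean_mask(x, mask) instead.
###Output
_____no_output_____
###Markdown
Now complete the graded `yolo_filter_boxes()` function below.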
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = None
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = None
box_class_scores = None
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = None
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = None
boxes = None
classes = None
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
###Output
_____no_output_____
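###Markdown
Before checking your output against the expected values, the short ungraded cell below shows how `tf.boolean_mask()` behaves on the small made-up score vector from the reminder above, mirroring Steps 3-4. The variable names are purely illustrative.
###Code
# Ungraded illustration of tf.boolean_mask() on a toy score vector.
import tensorflow as tf
toy_scores = tf.constant([0.9, 0.3, 0.4, 0.5, 0.1])
toy_mask = toy_scores >= 0.4                  # [True, False, True, True, False]
with tf.Session() as toy_sess:
    print(toy_sess.run(tf.boolean_mask(toy_scores, toy_mask)))   # [0.9 0.4 0.5]
###Output
_____no_output_____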
###Markdown
**Expected Output**: **scores[2]** 10.7506 **boxes[2]** [ 8.42653275 3.27136683 -0.5313437 -4.94137383] **classes[2]** 7 **scores.shape** (?,) **boxes.shape** (?, 4) **classes.shape** (?,) 2.3 - Non-max suppression Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). **Figure 7** : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. **Figure 8** : Definition of "Intersection over Union". **Exercise**: Implement `iou()`. Some hints: - In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width. - To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1). - You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that: - xi1 = maximum of the x1 coordinates of the two boxes - yi1 = maximum of the y1 coordinates of the two boxes - xi2 = minimum of the x2 coordinates of the two boxes - yi2 = minimum of the y2 coordinates of the two boxes In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
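To make the formula concrete, the ungraded cell below checks the IoU arithmetic by hand on the same two test boxes used further down; it is only a sanity check under the corner convention above, not the graded implementation.
###Code
# Ungraded arithmetic check of the IoU formula on the test boxes used below.
box1 = (2, 1, 4, 3)   # (x1, y1, x2, y2)
box2 = (1, 2, 3, 4)
xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])   # intersection upper-left  -> (2, 2)
xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])   # intersection lower-right -> (3, 3)
inter_area = (xi2 - xi1) * (yi2 - yi1)                    # 1
box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])     # 4
box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])     # 4
union_area = box1_area + box2_area - inter_area           # 7
print(inter_area / union_area)                            # 0.14285714285714285
###Output
_____no_output_____
###Markdown
Now implement the graded `iou()` function below.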
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = None
yi1 = None
xi2 = None
yi2 = None
inter_area = None
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = None
box2_area = None
union_area = None
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = None
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
###Output
_____no_output_____
###Markdown
**Expected Output**: **iou = ** 0.14285714285714285 You are now ready to implement non-max suppression. The key steps are: 1. Select the box that has the highest score. 2. Compute its overlap with all other boxes, and remove boxes that overlap it by more than `iou_threshold`. 3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box. This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain. **Exercise**: Implement `yolo_non_max_suppression()` using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation): - [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression) - [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
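If you have not used these two functions together before, the ungraded toy example below shows the pattern on three hand-made boxes in corner format; the numbers and variable names are made up purely for illustration.
###Code
# Ungraded illustration of tf.image.non_max_suppression() and K.gather() on toy data.
import tensorflow as tf
from keras import backend as K
toy_boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],    # box 0
                         [0.0, 0.0, 0.9, 0.9],    # box 1: overlaps box 0 heavily (IoU = 0.81)
                         [2.0, 2.0, 3.0, 3.0]])   # box 2: far from the others
toy_scores = tf.constant([0.9, 0.75, 0.6])
# Keep at most 10 boxes, suppressing any box whose IoU with an already-kept box exceeds 0.5
nms_indices = tf.image.non_max_suppression(toy_boxes, toy_scores, max_output_size=10, iou_threshold=0.5)
kept_scores = K.gather(toy_scores, nms_indices)
with tf.Session() as toy_sess:
    print(toy_sess.run(nms_indices))    # [0 2] -- box 1 is suppressed by the higher-scoring box 0
    print(toy_sess.run(kept_scores))    # [0.9 0.6]
###Output
_____no_output_____
###Markdown
Now implement the graded `yolo_non_max_suppression()` function below.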
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = None
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = None
boxes = None
classes = None
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
_____no_output_____
###Markdown
**Expected Output**: **scores[2]** 6.9384 **boxes[2]** [-5.299932 3.13798141 4.45036697 0.95942086] **classes[2]** -2.24527 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) 2.4 Wrapping up the filtering It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. **Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the YOLO box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`, and
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; we'll show you where they need to be called.
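You do not need to implement either helper. Purely as a rough mental model (this is an illustrative guess at the idea, not the provided `scale_boxes` code), rescaling box corners expressed as fractions of the image onto a 720x1280 image amounts to multiplying by the image dimensions:
###Code
# Illustrative guess at the idea behind scaling boxes to the image size -- NOT the provided scale_boxes().
import numpy as np
normalized_box = np.array([[0.1, 0.2, 0.5, 0.6]])   # hypothetical (y1, x1, y2, x2) as fractions of the image
image_height, image_width = 720.0, 1280.0
image_dims = np.array([image_height, image_width, image_height, image_width])
print(normalized_box * image_dims)                   # approximately [[ 72. 256. 360. 768.]]
###Output
_____no_output_____
###Markdown
Now complete the graded `yolo_eval()` function below.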
###Code
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input image shape; in this notebook we use (720., 1280.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = None
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = None
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = None
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
###Output
_____no_output_____
###Markdown
**Expected Output**: **scores[2]** 138.791 **boxes[2]** [ 1292.32971191 -278.52166748 3876.98925781 -835.56494141] **classes[2]** 54 **scores.shape** (10,) **boxes.shape** (10, 4) **classes.shape** (10,) **Summary for YOLO**: - Input image (608, 608, 3) - The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output. - After flattening the last two dimensions, the output is a volume of shape (19, 19, 425): - Each cell in a 19x19 grid over the input image gives 425 numbers. - 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture. - 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect - You then select only a few boxes based on: - Score-thresholding: throw away boxes that have detected a class with a score less than the threshold - Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes - This gives you YOLO's final output. 3 - Test YOLO pretrained model on images In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
###Code
sess = K.get_session()
###Output
_____no_output_____
###Markdown
3.1 - Defining classes, anchors and image shape. Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell. The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
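Both helpers are provided for you. If you are curious what such loaders might look like, here is a hypothetical sketch (the assumed file formats and the `_sketch` names are illustrative; the provided `read_classes` and `read_anchors` may differ in detail):
###Code
# Hypothetical sketch of file loaders -- the provided read_classes/read_anchors may differ.
import numpy as np
def read_classes_sketch(path):
    # Assumes one class name per line, e.g. "person", "bicycle", "car", ...
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]
def read_anchors_sketch(path):
    # Assumes comma-separated floats, grouped into (width, height) pairs for the 5 anchor boxes.
    with open(path) as f:
        values = [float(v) for v in f.read().split(',')]
    return np.array(values).reshape(-1, 2)
###Output
_____no_output_____
###Markdown
Run the next cell to load these quantities with the provided helpers.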
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
###Output
_____no_output_____
###Markdown
3.2 - Loading a pretrained model Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/yolo.h5")
###Output
_____no_output_____
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
###Code
yolo_model.summary()
###Output
_____no_output_____
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine. **Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure 2. 3.3 - Convert output of the model to usable bounding box tensors The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
###Code
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
###Output
_____no_output_____
###Markdown
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function. 3.4 - Filtering boxes `yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
###Code
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
###Output
_____no_output_____
###Markdown
3.5 - Run the graph on an image Let the fun begin. You have created a graph (`sess`) that can be summarized as follows: 1. yolo_model.input is given to `yolo_model`. The model is used to compute the output yolo_model.output 2. yolo_model.output is processed by `yolo_head`. It gives you yolo_outputs 3. yolo_outputs goes through a filtering function, `yolo_eval`. It outputs your predictions: scores, boxes, classes **Exercise**: Implement `predict()` which runs the graph to test YOLO on an image. You will need to run a TensorFlow session to have it compute `scores, boxes, classes`. The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs: - image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it. - image_data: a numpy-array representing the image. This will be the input to the CNN. **Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
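If the session/feed_dict mechanics are new to you, the tiny ungraded TensorFlow 1.x example below shows the general pattern (the placeholder and values are made up); `predict()` follows the same pattern, feeding `image_data` through `yolo_model.input` and adding `K.learning_phase(): 0`.
###Code
# Ungraded illustration of the sess.run(fetches, feed_dict=...) pattern used in predict() below.
import numpy as np
import tensorflow as tf
toy_input = tf.placeholder(tf.float32, shape=(None, 3))   # hypothetical placeholder standing in for yolo_model.input
toy_output = 2.0 * toy_input                               # some tensor computed from the placeholder
with tf.Session() as toy_sess:
    print(toy_sess.run(toy_output, feed_dict={toy_input: np.ones((2, 3))}))   # a (2, 3) array of 2.0s
###Output
_____no_output_____
###Markdown
Now implement the graded `predict()` function below.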
###Code
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = None
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
###Output
_____no_output_____ |